[1] Weil S. Ceph: Reliable, scalable, and high-performance distributed storage[D]. California: University of California, Santa Cruz, 2007.
[2] Weil S, Leung A, Brandt S, et al. RADOS: A scalable, reliable storage service for petabyte-scale storage clusters[C]// Gibson G A. Proceedings of the 2nd International Workshop on Petascale Data Storage (PDSW '07). Reno, Nevada: ACM, 2007: 35-44.
[3] Weil S, Brandt S, Miller E, et al. Ceph: A scalable, high-performance distributed file system[C]// Bershad B. Proceedings of the 7th Symposium on Operating Systems Design and Implementation (OSDI '06). California: USENIX Association, 2006: 307-320.
[4] Weil S, Brandt S, Miller E, et al. CRUSH: Controlled, scalable, decentralized placement of replicated data[C]// Miller B H. Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC '06). New York: ACM, 2006: 122-130.
[5] Weil S, Pollack K, Brandt S, et al. Dynamic metadata management for petabyte-scale file systems[C]// Huskamp J. Proceedings of the 2004 ACM/IEEE Conference on Supercomputing (SC '04). Washington: IEEE Computer Society, 2004: 4.
[6] Oh M, Eom J, Yoon J, et al. Performance optimization for all flash scale-out storage[C]// Kim S. 2016 IEEE International Conference on Cluster Computing (CLUSTER). Taipei: IEEE, 2016: 316-325.
[7] Brim M, Dillow D, Oral S, et al. Asynchronous object storage with QoS for scientific and commercial big data[C]// Hildebrand D. Proceedings of the 8th Parallel Data Storage Workshop (PDSW '13). Denver, Colorado: ACM, 2013: 7-13.
[8] Zhang X, Wang Y, Wang Q, et al. A new approach to double I/O performance for Ceph distributed file system in cloud computing[C]// Groves B. 2019 2nd International Conference on Data Intelligence and Security (ICDIS). South Padre Island: IEEE, 2019: 68-75.
[9] 刘鑫伟. Research on replica consistency in the Ceph distributed storage system[D]. Hubei: Huazhong University of Science and Technology, 2016.
[10] 姚朋成. Research on heterogeneous storage optimization mechanisms for Ceph[D]. Chongqing: Chongqing University of Posts and Telecommunications, 2019.
[11] Zhang J, Wu Y, Chung Y C. PROAR: A weak consistency model for Ceph[C]// Jia X H. IEEE International Conference on Parallel and Distributed Systems (ICPADS). Wuhan: IEEE, 2017: 347-353.
[12] Hilmi M, Mulyana E, Hendrawan H, et al. Analysis of network capacity effect on Ceph based cloud storage performance[C]// Garnida H. 2019 IEEE 13th International Conference on Telecommunication Systems, Services, and Applications (TSSA). Indonesia: IEEE, 2019: 22-24.
[13] Yusuf I N, Mulyana E, Hendrawan H, et al. Utilizing CRUSH algorithm on Ceph to build a cluster of reliable data storage[C]// Arseno D. 2019 IEEE 13th International Conference on Telecommunication Systems, Services, and Applications (TSSA). Indonesia: IEEE, 2019: 17-21.
[14] Zhan K, Xu L, Yuan Z, et al. Performance optimization of large files writes to Ceph based on multiple pipelines algorithm[C]// Hagersten E. 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications. Melbourne: IEEE, 2018: 525-532.
[15] Fan Y, Wang Y, Ye M. An improved small file storage strategy in Ceph file system[C]// Guo P. 2018 14th International Conference on Computational Intelligence and Security (CIS). Hangzhou: IEEE Computer Society, 2018: 488-491.
[16] 邵曦煜, 李京, 周志强. A cross-cluster migration algorithm for Ceph block devices[J]. Journal of University of Science and Technology of China, 2018, 48(9): 61-67.
[17] Oh M, Park S, Yoon J, et al. Design of global data deduplication for a scale-out distributed storage system[C]// Dinu F. 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS). Vienna: IEEE, 2018: 1063-1073.
[18] Zhan K, Piao A H. Optimization of Ceph reads/writes based on multi-threaded algorithms[C]// Brownlee N.
2016 IEEE International Conference on High Performance Computing and Communications (HPCC). Sydney: IEEE, 2016: 719-725.