
Commit dc78f33

add prophet-rocksdb

1 parent: 7a9ecda


55 files changed: +2899 −255 lines

Diff for: .gitignore

+3
@@ -1,6 +1,9 @@
 make_config.mk
 rocksdb.pc
 
+out.txt
+
+
 *.a
 *.arc
 *.d

Diff for: Makefile

+1
@@ -10,6 +10,7 @@ BASH_EXISTS := $(shell which bash)
 SHELL := $(shell which bash)
 include common.mk
 
+USE_RTTI = 1
 CLEAN_FILES = # deliberately empty, so we can append below.
 CFLAGS += ${EXTRA_CFLAGS}
 CXXFLAGS += ${EXTRA_CXXFLAGS}

Diff for: README.md

+29-22
@@ -1,31 +1,38 @@
-## RocksDB: A Persistent Key-Value Store for Flash and RAM Storage
+## Prophet
 
-[![CircleCI Status](https://circleci.com/gh/facebook/rocksdb.svg?style=svg)](https://circleci.com/gh/facebook/rocksdb)
-[![Appveyor Build status](https://ci.appveyor.com/api/projects/status/fbgfu0so3afcno78/branch/main?svg=true)](https://ci.appveyor.com/project/Facebook/rocksdb/branch/main)
-[![PPC64le Build Status](http://140-211-168-68-openstack.osuosl.org:8080/buildStatus/icon?job=rocksdb&style=plastic)](http://140-211-168-68-openstack.osuosl.org:8080/job/rocksdb)
+Build Prophet:
 
-RocksDB is developed and maintained by Facebook Database Engineering Team.
-It is built on earlier work on [LevelDB](https://github.com/google/leveldb) by Sanjay Ghemawat ([email protected])
-and Jeff Dean ([email protected])
+Please make sure you have installed the required dependencies listed in [RocksDB's INSTALL.md](https://github.com/facebook/rocksdb/blob/main/INSTALL.md), and replace `<zoned block device>` with the real ZNS SSD device name.
 
-This code is a library that forms the core building block for a fast
-key-value server, especially suited for storing data on flash drives.
-It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs
-between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF)
-and Space-Amplification-Factor (SAF). It has multi-threaded compactions,
-making it especially suitable for storing multiple terabytes of data in a
-single database.
+```bash
+sudo git clone https://github.com/Flappybird11101001/prophet-rocksdb.git rocksdb
+cd rocksdb
+sudo git clone https://github.com/Flappybird11101001/prophet-zenfs.git plugin/zenfs
+sudo DISABLE_WARNING_AS_ERROR=1 ROCKSDB_PLUGINS=zenfs make -j db_bench install DEBUG_LEVEL=0
+pushd .
+cd plugin/zenfs/util
+sudo make
+popd
+```
 
-Start with example usage here: https://github.com/facebook/rocksdb/tree/main/examples
+Initialize the ZNS SSD device:
 
-See the [github wiki](https://github.com/facebook/rocksdb/wiki) for more explanation.
+```bash
+echo deadline > /sys/class/block/<zoned block device>/queue/scheduler
+sudo ./plugin/zenfs/util/zenfs mkfs --zbd=<zoned block device> --aux_path=./temp --force
+```
 
-The public interface is in `include/`. Callers should not include or
-rely on the details of any other header files in this package. Those
-internal APIs may be changed without warning.
+# Benchmark
 
-Questions and discussions are welcome on the [RocksDB Developers Public](https://www.facebook.com/groups/rocksdb.dev/) Facebook group and [email list](https://groups.google.com/g/rocksdb) on Google Groups.
+Run db_bench to test (the same configuration as in the paper, with a 64 MB SST file size):
 
-## License
+```bash
+sudo ./db_bench -num=400000000 -key_size=8 -value_size=256 -statistics=true -max_bytes_for_level_base=268435456 -target_file_size_base=67108864 -write_buffer_size=134217728 -writable_file_max_buffer_size=134217728 -max_bytes_for_level_multiplier=4 -max_background_compactions=1 -max_background_flushes=1 -max_background_jobs=1 -soft_pending_compaction_bytes_limit=67108864 -hard_pending_compaction_bytes_limit=67108864 -level0_stop_writes_trigger=12 -level0_slowdown_writes_trigger=8 -level0_file_num_compaction_trigger=4 -max_write_buffer_number=1 -threads=1 -compaction_pri=4 -open_files=1000 -target_file_size_multiplier=1 --fs_uri=zenfs://dev:<zoned block device> --benchmarks='fillrandom,stats' --use_direct_io_for_flush_and_compaction
+```
 
-RocksDB is dual-licensed under both the GPLv2 (found in the COPYING file in the root directory) and Apache 2.0 License (found in the LICENSE.Apache file in the root directory). You may select, at your option, one of the above-listed licenses.
+
+![allocation_migrated_data](./allocation_migrated_data.jpg)
+
+![allocation_wa](./allocation_wa.jpg)
+
+![allocation_zone_number_page-0001](./allocation_zone_number.jpg)
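As a quick sanity check on the byte-valued flags in the db_bench command above (a standalone sketch, not part of the commit), the values decode to round mebibyte counts, including the 64 MB SST file size the README mentions:

```python
# Decode the byte-valued db_bench flags used in the benchmark command.
# The values are copied from the invocation above; the MiB equivalents
# are computed here purely as a sanity check.
flags = {
    "max_bytes_for_level_base": 268435456,
    "target_file_size_base": 67108864,
    "write_buffer_size": 134217728,
    "soft_pending_compaction_bytes_limit": 67108864,
}

MiB = 1024 * 1024
for name, value in flags.items():
    print(f"{name} = {value // MiB} MiB")
```

In particular, `target_file_size_base=67108864` is exactly 64 MiB, matching the stated SST file size.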

Diff for: allocation_migrated_data.jpg

149 KB (binary image added)

Diff for: allocation_wa.jpg

149 KB (binary image added)

Diff for: allocation_zone_number.jpg

211 KB (binary image added)

Diff for: clear.sh

+7
@@ -0,0 +1,7 @@
+rm -f level.out
+rm -f lifetime.out
+rm -f number_life.out
+rm -f factor.out
+rm -f last_compact.out
+rm -f rank.out
+rm -rf clock.out

Diff for: clock_pic.py

+31
@@ -0,0 +1,31 @@
+import matplotlib.pyplot as plt
+import numpy as np
+
+prev_list = []
+tmp_prev_flush_list = []
+prev_flush_list = []
+type_list = []
+tot = 0
+for line in open("clock.out"):
+    tot = tot + 1
+    if(tot != 1):
+        prev_list.append(int(line.split(' ')[0]))
+        tmp_prev_flush_list.append(int(line.split(' ')[1]))
+        type_list.append(int(line.split(' ')[2]))
+
+
+y = np.array(prev_list)
+plt.hist(prev_list, bins=100, color="brown")
+plt.show()
+
+
+tot = 0
+for i in range(0, len(type_list)):
+    tot = tot + 1
+    if i + 1 < len(type_list) and type_list[i] == 2 and type_list[i + 1] == 1:
+        prev_flush_list.append(tmp_prev_flush_list[i])
+
+
+# y = np.array(prev_flush_list)
+plt.hist(prev_flush_list, bins=100, color="brown")
+plt.show()
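The second loop in clock_pic.py keeps a `tmp_prev_flush` value only when a type-2 record is immediately followed by a type-1 record. A minimal sketch of that pairing rule, using synthetic sample records (the values below are illustrative only, not real clock.out data):

```python
# Synthetic event stream: parallel lists of timestamps and event types,
# mirroring the columns clock_pic.py reads from clock.out.
tmp_prev_flush_list = [10, 20, 30, 40, 50]
type_list           = [2,  1,  2,  2,  1]

# Keep a timestamp only where a type-2 event is immediately
# followed by a type-1 event, as in the script's second loop.
prev_flush_list = [
    tmp_prev_flush_list[i]
    for i in range(len(type_list) - 1)
    if type_list[i] == 2 and type_list[i + 1] == 1
]
print(prev_flush_list)  # → [10, 40]
```

Only indices 0 and 3 satisfy the 2-then-1 transition, so two of the five timestamps survive into the histogram input.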

Diff for: db/builder.cc

+29-2
@@ -43,6 +43,12 @@
 
 namespace ROCKSDB_NAMESPACE {
 
+extern void get_predict(int level, const FileMetaData &file, Version *v, const Compaction* compaction_, int &predict_, int &predict_type_, int &tmp_rank);
+extern void set_deleted_time(int fnumber, int clock);
+extern void update_fname(uint64_t id, std::string name);
+extern std::string get_fname(uint64_t id);
+extern int get_clock();
+
 class TableFactory;
 
 TableBuilder* NewTableBuilder(const TableBuilderOptions& tboptions,
@@ -147,10 +153,31 @@ Status BuildTable(
     bool use_direct_writes = file_options.use_direct_writes;
     TEST_SYNC_POINT_CALLBACK("BuildTable:create_file", &use_direct_writes);
 #endif  // !NDEBUG
-    IOStatus io_s = NewWritableFile(fs, fname, &file, file_options);
+    //file_options.lifetime = 1000;
+    FileOptions tmp_file_options = file_options;
+    tmp_file_options.lifetime = 100;
+
+    update_fname(meta->fd.GetNumber(), fname);
+    // the write is issued here
+    IOStatus io_s = NewWritableFile(fs, fname, &file, tmp_file_options);
+
+
+    int predict;
+    int predict_type;
+    int rank;
+    const int output_level = 0;
+
+    get_predict(output_level, *meta, versions->GetColumnFamilySet()->GetDefault()->current(), nullptr, predict, predict_type, rank);
+    set_deleted_time(meta->fnumber, predict + get_clock());
+    printf("meta->fname=%s get_clock=%d lifetime=%d\n", fname.c_str(), get_clock(), predict + get_clock());
+    fs->SetFileLifetime(fname, predict + get_clock(), get_clock(), 0, output_level, std::vector<std::string> {});
+
+
+
+
     assert(s.ok());
     s = io_s;
-    if (io_status->ok()) {
+    if (io_status->ok()) {
       *io_status = io_s;
     }
     if (!s.ok()) {
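The code added to BuildTable above computes the lifetime hint handed to the filesystem as the predicted remaining lifetime plus the current logical clock (the same value appears in both the `set_deleted_time` call and the `printf`). A minimal sketch of that arithmetic, with `get_predict` and `get_clock` replaced by stand-in stubs (the real functions take more arguments and live elsewhere in the tree):

```python
# Stand-in stubs; the real get_clock()/get_predict() are external
# C++ functions declared in the diff above. Values are illustrative.
def get_clock():
    return 1200  # current logical clock tick

def get_predict(level):
    return 100   # predicted remaining lifetime for a new L0 file

# Mirror of the expression in the diff: the deletion-time hint
# passed to set_deleted_time()/SetFileLifetime() is
# predict + get_clock().
predict = get_predict(level=0)
deleted_time = predict + get_clock()
print(deleted_time)  # → 1300
```

The hint is an absolute clock value (when the file is expected to die), not a duration, which is why the current clock is added in.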

Diff for: db/column_family.cc

+1-1
@@ -1118,7 +1118,7 @@ Compaction* ColumnFamilyData::PickCompaction(
       imm_.current()->GetEarliestSequenceNumber(false));
   auto* result = compaction_picker_->PickCompaction(
       GetName(), mutable_options, mutable_db_options, current_->storage_info(),
-      log_buffer, earliest_mem_seqno);
+      log_buffer, earliest_mem_seqno);  // PickCompaction selects the files to be compacted
   if (result != nullptr) {
     result->SetInputVersion(current_);
   }

Diff for: db/compaction/compaction.h

+2
@@ -49,6 +49,7 @@ struct AtomicCompactionUnitBoundary {
   const InternalKey* largest = nullptr;
 };
 
+// This structure maintains all SST files within a single level
 // The structure that manages compaction input files associated
 // with the same physical level.
 struct CompactionInputFiles {
@@ -438,6 +439,7 @@ class Compaction {
   bool l0_files_might_overlap_;
 
   // Compaction input files organized by level. Constant after construction
+  // The input files of the compaction
   const std::vector<CompactionInputFiles> inputs_;
 
   // A copy of inputs_, organized more closely in memory
