What would be a reasonable strategy for scaling the Feedsim dataset to leverage large-memory systems, such as one with 256 cores and 2 TB of RAM (i.e., 8 GB per core)?
Specifically, would it be appropriate to increase parameters like "graph_scale" and/or "num_objects" to better utilize the available memory?
Also, what are suggested values for parameters that scale the workload meaningfully without distorting its intended behavior or characteristics?
For reference, this is how the leaf node service is currently started:
```bash
monitor_port=$((port-1000))
# jemalloc tuning via MALLOC_CONF; LeafNodeRank is launched in the background
MALLOC_CONF=narenas:20,dirty_decay_ms:5000 build/workloads/ranking/LeafNodeRank \
  --port="$port" \
  --monitor_port="$monitor_port" \
  --graph_scale=21 \
  --graph_subset=2000000 \
  --threads="$thrift_threads" \
  --cpu_threads="$ranking_cpu_threads" \
  --timekeeper_threads=2 \
  --io_threads="$EVENTBASE_THREADS_DEFAULT" \
  --srv_threads="$SRV_THREADS_DEFAULT" \
  --srv_io_threads="$srv_io_threads" \
  --num_objects=2000 \
  --graph_max_iters=1 \
  --noaffinity \
  --min_icache_iterations="$icache_iterations" &
```
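For concreteness, below is a sketch of the kind of scaled-up invocation being asked about. The specific values (`--graph_scale=26`, `--num_objects=10000`) are hypothetical placeholders rather than validated recommendations, and they assume that `graph_scale` acts as a log2-style size factor (each increment roughly doubling the graph footprint); only flags already present in the stock launch command are used.

```bash
# Hypothetical scaled-up launch for a 256-core / 2 TB host.
# Values below are illustrative placeholders, NOT validated settings:
#   --graph_scale=26     assumes each +1 roughly doubles graph memory
#                        relative to the default of 21 (unverified)
#   --num_objects=10000  scales the object count 5x (arbitrary choice)
# All other flags are unchanged from the stock launch command above.
MALLOC_CONF=narenas:20,dirty_decay_ms:5000 build/workloads/ranking/LeafNodeRank \
  --port="$port" \
  --monitor_port="$monitor_port" \
  --graph_scale=26 \
  --graph_subset=2000000 \
  --threads="$thrift_threads" \
  --cpu_threads="$ranking_cpu_threads" \
  --timekeeper_threads=2 \
  --io_threads="$EVENTBASE_THREADS_DEFAULT" \
  --srv_threads="$SRV_THREADS_DEFAULT" \
  --srv_io_threads="$srv_io_threads" \
  --num_objects=10000 \
  --graph_max_iters=1 \
  --noaffinity \
  --min_icache_iterations="$icache_iterations" &
```

Presumably any such change would need to be checked against the resident memory actually consumed (e.g. via RSS of the LeafNodeRank process) and against latency/throughput characteristics, to confirm the larger dataset uses the available memory without distorting the workload's intended behavior.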