In the root directory:

```sh
cargo build
```

If you want to run the inference applications, you need to download the models first. The models are
If you want to run the ResNet application, create a checkpoint folder under tests/apps/ and run the training application first.
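The checkpoint setup described above can be sketched as follows (the exact folder name and training-script path depend on the repository layout, so treat them as assumptions):

```shell
# Create the checkpoint folder the training application writes to
# (path assumed from the instructions above).
mkdir -p tests/apps/checkpoint
```

After the folder exists, run the training application once so it can populate the checkpoint before starting inference.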
In the root directory:

```sh
NETWORK_CONFIG="path/to/config.toml" cargo run server
```

In the tests/apps directory:

```sh
NETWORK_CONFIG="path/to/config.toml" ./run.sh <target-python> <params>
# e.g. ./run.sh infer/gpt2/inference.py 1 4
```

Information about configurations for running the server can be found in the integration-test section of the README.
epoch: 1, batch_size: 4
remoting output:

```
["Hello, I'm a language model, not a programming language. I'm a language model. I", "Hello, I'm a language model, not a programming language. I'm a language model. I", "Hello, I'm a language model, not a programming language. I'm a language model. I", "Hello, I'm a language model, not a programming language. I'm a language model. I"]
```

local output:

```
["Hello, I'm a language model, not a programming language. I'm a language model. I", "Hello, I'm a language model, not a programming language. I'm a language model. I", "Hello, I'm a language model, not a programming language. I'm a language model. I", "Hello, I'm a language model, not a programming language. I'm a language model. I"]
```

remoting output:

```
["The primary language of the United States is English.","The primary language of the United States is English.","The primary language of the United States is English.","The primary language of the United States is English."]
```

local output:

```
["The primary language of the United States is English.","The primary language of the United States is English.","The primary language of the United States is English.","The primary language of the United States is English."]
```
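The point of the paired outputs above is that the remoting path and the local path produce identical results for the same prompt and batch. A minimal sketch of that sanity check (the lists below are copied from the sample output above, not produced by running the apps):

```python
# Sanity check: remoting output should match local output exactly.
# These lists reproduce the sample outputs shown above.
remoting = ["The primary language of the United States is English."] * 4
local = ["The primary language of the United States is English."] * 4

assert remoting == local, "remoting and local outputs diverged"
print("outputs match")
```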

