Linux automation
My package's dependencies are not stable yet; I am considering switching the base Python version back to a static python2.7, tinypy, or micropython. (The current one is python3.10.4; I have a statically built amd64 binary at: https://gitlab.com/yingshaoxo/use_docker_to_build_static_python3_binary_executable)
But it is hard to make sure everything still works fine 20 years later, because I can't be sure that 20 years from now you will still be able to buy hardware that can run python2.7, tinypy, or an old version of micropython. (Maybe you simply can't download a python2.7 binary or its source code, or can't compile it because you don't have an old GCC.)
If you use a new GCC or a new Python, it ends up like this: with the same source code, 50 years after you first compiled it, the binary grows to 500 times its original size, or the new Python is 500 times bigger than 3 MB, or the performance drops to 1/100 of the original. That is to say, on an old computer you could run a 5 KB program to finish a task very quickly; on a new computer you have to use a 500 MB program to finish the same task, and it runs slower.
Just do a search for "How to build a computer by using basic electronic units? Do not use any other chip or micro_controller." You will simply find there are no answers. Do you live in a free world?
And also, the 'type system' is a lie; they use the type information to convert Python into another typed programming language. If you really want to do it, you have to create a new file for each sub_file, similar to 'typed_terminal.py' or 'typed_disk.py'. Those typed Python scripts will not do the real work; they only offer a typed API, and the real work has to be done by calling functions from 'terminal.py' or 'disk.py'.
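Just to illustrate that wrapper pattern, here is a rough sketch of what a 'typed_terminal.py' could look like (the file and function names here are made up for illustration, not the real layout of auto_everything):

# typed_terminal.py -- hypothetical sketch of a typed wrapper
# It only offers a typed API; the real work is still done by the untyped terminal.py.
from terminal import Terminal

_terminal = Terminal()

def run_command(command: str) -> str:
    # the typed layer just declares types and forwards the call to the untyped module
    return str(_terminal.run_command(command))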
Can we make our Python code runnable across python2.7, 3.2, 3.10, and any newer version? The answer is yes. We just have to find a way to use simple syntax to create a wrapper for Python; it is like using plain Python to create a new Python. Then, for any new Python code, we use our own Python parser or interpreter. It is like "python_whatever_version our_python_interpreter.py the_real_python_script_that_uses_unknown_version_syntax.py". (The official Python did not do this well, because they keep changing the C-level syntax parser. That is wrong; they should use Python itself as the first compatibility layer, so that the newest Python can run python2.7 code without problems.)
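A minimal sketch of that idea, assuming the wrapper itself only uses simple, old syntax (our_python_interpreter.py is a made-up name; a real version would need a proper parser, this one only rewrites python2-style print statements):

# our_python_interpreter.py -- hypothetical sketch
# Usage: python_whatever_version our_python_interpreter.py the_real_script.py
import sys

def translate_to_host_python(source_code):
    # a real interpreter would parse the whole syntax tree here;
    # as a trivial example, rewrite python2-style "print x" into "print(x)"
    new_lines = []
    for line in source_code.splitlines():
        stripped = line.strip()
        if stripped.startswith("print ") and not stripped.startswith("print("):
            indent = line[:len(line) - len(line.lstrip())]
            new_lines.append(indent + "print(" + stripped[len("print "):] + ")")
        else:
            new_lines.append(line)
    return "\n".join(new_lines)

if __name__ == "__main__":
    with open(sys.argv[1], "r") as a_file:
        code = a_file.read()
    exec(translate_to_host_python(code))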
python3 -m pip install "git+https://github.com/yingshaoxo/auto_everything.git@dev" --break-system-packages
# Use github with care, you may get banned (404) for saying the 'fuck' word: https://yingshaoxo.xyz/pictures/github/index.html
or
python3 -m pip install auto_everything --break-system-packages
# I think the newer versions of pypi, pip, and the pip package format have problems. Why do they use a file that ends with ".toml"? Do they think a "setup.py" Python file can't be used to represent information? That a Python dict can't be used to represent information? Are they stupid? Because of the ".toml" file, I can't use the old version-8 pip to install new packages, and I can't even upgrade pip itself, because it can't find a "setup.py" file in the new pip package. They made a big bug.
or
Just copy the 'auto_everything' sub_folder and put it into the root folder of your project, so that you can directly import it.
# For an amd64 linux machine, you can get a statically compiled python3.10 by doing the following.
sudo su
curl -sSL https://gitlab.com/yingshaoxo/use_docker_to_build_static_python3_binary_executable/-/raw/master/install.sh?ref_type=heads | bash
What the fuck is debian thinking? Why can't we use pip to directly install a package anymore? Do the debian/ubuntu linux branches want to force people to put their packages through a strict censorship process, so that they can decide which software is good and which is not?
'export PATH=$PATH:/**/bin/' still works fine.
Where is the freedom? My dear people!
What is the difference between pip install and apt install? Simply that pypi has more freedom?
2025: Actually, I found pypi is also not free anymore; they have made package publishing more and more complex and painful. (Without 2-factor verification, you can't even log in to your pypi account.) I think you had better create your own hardware and launch a new software distribution platform. You would have total freedom in your kingdom if you created that world by yourself.
I even found that a VPS (remote server computer) is also not stable: they force you to update the system, and in the new system there will always be a background process monitoring your users' data. And considering censorship for bad legal reasons, I think you should also create your own physical network.
"More code, more bug, more dependencies, more unstable, remember that. --- yingshaoxo"
sudo pip3 install auto_everything==3.9
or
poetry add auto_everything==3.9
# Use poetry with care; it won't tell you the path where your package is installed
cd ~
git clone https://github.com/yingshaoxo/auto_everything.git
export PYTHONPATH="/home/xx/auto_everything"
./python3.2_static_amd64.run your_script.py
https://github.com/yingshaoxo/auto_everything/tree/dev/example/image/micropython
or
Click this: Free Mobile Phone Project
Why did I create this Python-based phone project?
If a new computer has no program that auto-starts at boot and lets that program take full control of the screen and keyboard input, you don't need to buy it. Without freedom, it is a waste of life and money.
After a few searches, I got one piece of information: there is no phone on the market that supports running a piece of software or an application at power-on boot time. In other words, I can't just create a piece of software to take over control of the old system. If I could, I could just create a piece of software as my own new system for that phone.
So you have to use a Raspberry Pi for the creation, but it takes too much power compared to a normal phone. (The Raspberry Pi CPU gets hot when running, especially if you use the WiFi version.)
In the end, it just reminds me how well my old phone project works: it uses "yd_rp2040_lite_pi_pico as mother_board, ili9341 as touch and display screen". All in all, it is a micropython micro_controller based phone project.
(esp32 is bad, because it gets hot and takes too much power when using WiFi. And I doubt you can really turn the WiFi off.)
It has full control over everything: it can slow down the CPU frequency in real time, and it can turn the screen on or off in real time. So it saves as much power as possible.
(It could connect to a radio wireless network or a WiFi network, but I choose not to, because that saves power.)
But this is not a forever solution, because one day you will not be able to buy a micro_controller that has freedom unless you create it by yourself...
from auto_everything.terminal import Terminal
t = Terminal()

reply = t.run_command('uname -a')
print(reply)

commands = """
sudo apt update
uname -a
"""
t.run(commands)

t.run_program('firefox')

t.run_py('your_file.py')

t.run_sh('your_file.sh')

status = t.is_running('terminal')
print(status)

t.kill('terminal')

from auto_everything.python import Python
py = Python()

py.fire(your_class_name)

py.make_it_global_runnable(executable_name="Tools")

Let's assume you have a file named Tools.py:
from auto_everything.base import Python
from auto_everything.terminal import Terminal  # the methods below call t.run(), so a Terminal instance is needed
py = Python()
t = Terminal()

class Tools():
    def push(self, comment):
        t.run('git add .')
        t.run('git commit -m "{}"'.format(comment))
        t.run('git push origin')

    def pull(self):
        t.run("""
        git fetch --all
        git reset --hard origin/master
        """)

    def undo(self):
        t.run("""
        git reset --mixed HEAD~1
        """)

    def reset(self):
        t.run("""
        git reset --hard HEAD^
        """)

    def hi(self):
        print("Hi, Python!")
py.fire(Tools)
py.make_it_global_runnable(executable_name="MyTools")

After running this script for the first time with python3 Tools.py hi, you will be able to use MyTools to run this script from anywhere on your machine:
yingshaoxo@pop-os:~$ MyTools hi
Hi, Python!
service Greeter {
    rpc say_hello (hello_request) returns (HelloReply);
}

enum UserStatus {
    OFFLINE = 0;
    ONLINE = 1;
}

message hello_request {
    string name = 1;
    UserStatus user_status = 2;
    repeated UserStatus user_status_list = 3;
}

message HelloReply {
    string message = 1;
}
from auto_everything.develop import YRPC
yrpc = YRPC()
for language in ["python", "dart", "typescript"]:
    yrpc.generate_code(
        which_language=language,
        input_folder="/home/yingshaoxo/CS/protocol_test/protocols",
        input_files=["english.proto"],
        output_folder="/Users/yingshaoxo/CS/protocol_test/generated_yrpc"
    )

Here, we only use Python to do the server-side part of the job.
from generated_yrpc.english_rpc import *
class NewService(Service_english):
    async def say_hello(self, item: hello_request) -> HelloReply:
        reply = HelloReply()
        reply.message = item.name
        return reply

service_instance = NewService()
run(service_instance, port="6060")

void main() async {
  var client = Client_english(
    service_url: "http://127.0.0.1:6060",
    error_handle_function: (error_message) {
      print(error_message);
    },
  );

  var result = await client.say_hello(
    item: hello_request(name: "yingshaoxo")
  );

  if (result != null) {
    print(result);
  }
}

from auto_everything.base import IO
io = IO()
io.write("hi.txt", "Hello, world!")
print(io.read("hi.txt"))
io.append("hi.txt", "\n\nI'm yingshaoxo.")
print(io.read("hi.txt"))

from auto_everything.disk import Disk
from pprint import pprint
disk = Disk()
files = disk.get_files(folder=".", type_limiter=[".mp4"])
files = disk.sort_files_by_time(files)
pprint(files)

from auto_everything.disk import Store
store = Store("test")
store.set("author", "yingshaoxo")
store.delete("author")
store.set("author", {"email": "[email protected]", "name": "yingshaoxo"})
print(store.get_items())
print(store.has_key("author"))
print(store.get("author", default_value=""))
print(store.get("whatever", default_value="alsjdasdfasdfsakfla"))
store.reset()
print(store.get_items())

# assuming EncryptionAndDecryption has been imported from its auto_everything module
encryption_and_decryption = EncryptionAndDecryption()
a_dict = encryption_and_decryption.get_secret_alphabet_dict("hello, world")
a_sentence = "I'm yingshaoxo."
encrypted_sentence = encryption_and_decryption.encode_message(a_secret_dict=a_dict, message=a_sentence)
print()
print(encrypted_sentence)
> B'i ybjdqahkxk.
decrypted_sentence = encryption_and_decryption.decode_message(a_secret_dict=a_dict, message=encrypted_sentence)
print(decrypted_sentence)
> I'm yingshaoxo.

# assuming JWT_Tool has been imported from its auto_everything module
jwt_tool = JWT_Tool()
secret = "I'm going to tell you a secret: yingshaoxo is the best."
a_jwt_string = jwt_tool.my_jwt_encode(data={"name": "yingshaoxo"}, a_secret_string_for_integrity_verifying=secret)
print(a_jwt_string)
> eyJhbGciOiAiTUQ1IiwgInR5cCI6ICJKV1QifQ==.eyJuYW1lIjogInlpbmdzaGFveG8ifQ==.583085987ba46636662dc71ca6227c0a
original_dict = jwt_tool.my_jwt_decode(jwt_string=a_jwt_string, a_secret_string_for_integrity_verifying=secret)
print(original_dict)
> {'name': 'yingshaoxo'}
fake_jwt_string = "aaaaaa.bbbbbb.abcdefg"
original_dict = jwt_tool.my_jwt_decode(jwt_string=fake_jwt_string, a_secret_string_for_integrity_verifying=secret)
print(original_dict)
> None

from auto_everything.web import Selenium
my_selenium = Selenium("https://www.google.com", headless=False)
d = my_selenium.driver
# get input box
xpath = '//*[@id="lst-ib"]'
elements = my_selenium.wait_until_elements_exists(xpath)
if len(elements) == 0:
    exit()
# text inputing
elements[0].send_keys('\b' * 20, "yingshaoxo")
# click search button
elements = my_selenium.wait_until_elements_exists('//input[@value="Google Search"]')
if len(elements):
    elements[0].click()
# exit
my_selenium.sleep(30)
d.quit()

We treat every char as an id or tensor element.
In a GPU-based machine learning algorithm, you often work with things like [23, 32, 34, 54].
But now it becomes ['a', 'b', 'c', 'd'], or ASCII numbers in [0, 255].
long sequence (meaning group) -> long sequence (meaning group)
what you do -> 你干什么
It depends on -> 这取决于
(It depends on) (what you do) -> 这取决于 你干什么
Meaning groups can be obtained automatically; all you have to do is count how many times each continuous word sequence appears. The more times a continuous word sequence appears, the more likely it is a meaning group.
It can all be summarized as "divide and conquer".
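A rough sketch of that counting idea (plain Python; the function name is made up and is not part of auto_everything):

# count how many times each continuous word sequence appears;
# frequent sequences are likely "meaning groups"
def get_meaning_group_counting_dict(text, max_group_length=4):
    words = text.split()
    counting_dict = {}
    for group_length in range(2, max_group_length + 1):
        for index in range(len(words) - group_length + 1):
            group = " ".join(words[index:index + group_length])
            counting_dict[group] = counting_dict.get(group, 0) + 1
    return counting_dict

counting_dict = get_meaning_group_counting_dict(
    "it depends on what you do . it depends on the weather ."
)
# "it depends on" appears twice, so it is more likely a meaning group
print(sorted(counting_dict.items(), key=lambda item: -item[1])[:3])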
one char predicts the next char
two chars predict the next char
...
one word predicts the next word
two words predict the next word
three words predict the next word
...
When you use it, use it from bottom to top: use the longest sequence to predict the next word first.
The more levels you make, the more accurate it will be.
It is a dict-based next-word generator, so the speed is super quick.
Don't expect this method to have high accuracy, because the logic is simple; it can only be used for punctuation adding if you use the previous words and the next words to predict the center character.
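A minimal sketch of that dict-based, longest-match-first generator (again, the function names are made up for illustration):

# build one dict per context length: the last N words -> next word
def build_next_word_dicts(text, max_level=3):
    words = text.split()
    level_dicts = {level: {} for level in range(1, max_level + 1)}
    for level in range(1, max_level + 1):
        for index in range(len(words) - level):
            key = " ".join(words[index:index + level])
            level_dicts[level][key] = words[index + level]
    return level_dicts

def predict_next_word(level_dicts, previous_words):
    # use it from bottom to top: try the longest context first
    for level in sorted(level_dicts.keys(), reverse=True):
        key = " ".join(previous_words[-level:])
        if key in level_dicts[level]:
            return level_dicts[level][key]
    return None

level_dicts = build_next_word_dicts("it depends on what you do and what you say")
print(predict_next_word(level_dicts, ["on", "what", "you"]))  # -> "do"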
#yingshaoxo: I could give you a template for general AI: ask 100000 people to work on one AI project and do hard coding, with each person writing if-else logic for 3 years and nobody doing repeated work. A general AI could be made if you have no dependencies and are not being spied on while offline, because those countless hard-coded functions will cover almost all language-level question-and-answer cases in normal life.
from auto_everything.terminal import Terminal
terminal = Terminal()

global_memory_dict = {}

def update_global_dict_based_on_new_information(input_text):
    global global_memory_dict
    # find a way to simplify the input_text as pure json5-style data
    global_memory_dict.update(dict({"input_text": input_text}))

def natural_language_to_task_code(input_text):
    global global_memory_dict
    # You have to let the machine generate different code or a different algorithm for different input_text, so that each time the reply is different.
    # generate_machine_code_from_memory_and_input_text is a placeholder you have to implement yourself.
    code = generate_machine_code_from_memory_and_input_text(global_memory_dict, input_text)
    return code

def execute_code(code):
    global global_memory_dict
    import json
    # For example, execute python code.
    previous_info_code = f"""
import json
memory_dict = json.loads('{json.dumps(global_memory_dict)}')
"""
    mixed_code = previous_info_code + code
    result = terminal.run_python_code(mixed_code)
    return result

while True:
    input_text = input("What you want to say? ")
    update_global_dict_based_on_new_information("question:\n" + input_text)
    code = natural_language_to_task_code(input_text)
    result = execute_code(code)
    print(result)
    update_global_dict_based_on_new_information("my_answer_and_experiment_result:\n" + result)

or
yingshaoxo's creation: how to create strong AI from basics:
(yingshaoxo's small invention: defining a data format for strong AI:)
#```
memory_dict = {}
____
condition description 1, walking on the street
action code, including if-else: if you find garbage, throw it into the garbage bin.
_____
condition description 2, having a drink in hand
action code: drink it. Change the memory dict to remember the drink has been consumed.
___
condition description 3, having nothing to do
just close your eyes and have a sleep first, then go around to find interesting things
___
somebody asks your name
say you are yingshaoxo. run_condition("show your appreciation").
___
need to show your appreciation
say thank you
___
...100 MB of text data; the actions are actually code that binds to real body actions, and it also does some calculation along the way, because it has to determine how to perform those actions based on the input data...
#```
Now that you have a separated condition-and-action database, you can make a loop that continually performs actions based on new conditions, and let your bot do actions that change the environment and its memory.
This is how simple the strong AI algorithm is: it is all about good data.
Creating a bot that replays normal human actions is easy; the hard part is replaying the learning process. The actions should contain some code that changes the bot's own database. The bot has to learn what is good and what is bad. It has to learn from experience and experiments.
For one condition, there can be multiple actions; choose one randomly.
All in all, this method is all about "using code to simulate human thinking, and recording and replaying actions".
Recursive programming is a key technique here; divide and conquer is a key technique here.
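A toy sketch of that condition-and-action loop (the conditions, actions, and memory handling here are made up for illustration; a real database would be the 100 MB of text data described above):

import random

memory_dict = {"drinks_consumed": 0}

# each condition maps to one or more actions; an action is just code that can
# change the environment and the memory_dict
condition_and_action_database = {
    "have a drink in hand": [
        lambda: memory_dict.update(
            {"drinks_consumed": memory_dict["drinks_consumed"] + 1}
        ),
    ],
    "somebody asks your name": [
        lambda: print("I am yingshaoxo."),
        lambda: print("People call me yingshaoxo."),
    ],
}

def get_current_condition():
    # in a real bot this would come from sensors or from text input
    return input("What is the current condition? ")

while True:
    condition = get_current_condition()
    actions = condition_and_action_database.get(condition)
    if actions is None:
        continue  # unknown condition: do nothing
    # for one condition there can be multiple actions, choose one randomly
    random.choice(actions)()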
> Creating a hard-coded dog or worm is obviously much easier than creating a hard-coded human, because those low-level animals only have a basic condition-and-action model; for example, when a dog is hungry, the dog finds food to eat.
> It feels like building strong AI is similar to doing: translation (translating multimedia such as images/text into simpler text) + looking up text in a database + an RPG text game that keeps calling its own functions + a programming language interpreter.
> You can even set up an age-based system, where every data piece carries an age parameter. When the robot is 20 years old, it first looks for the data for age 20; if nothing is found, it looks for the data for age 19, and so on; if no data is found all the way down to age 0, it simply does not handle this condition. This also prevents someone from changing a function's version and crashing the whole project. Every year, you get a fresh chance to refactor a certain function or code fragment.
Personally, I certainly don't have such a good memory; I can't remember what I was supposed to do in a given situation when I was 10 years old. But a robot remembers it clearly and almost never does repeated work. It looks like robots really do have the advantage: they can easily hold several TB of memory and a skill knowledge base.
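A small sketch of that age-based fallback lookup (all names here are made up for illustration):

# every data piece carries an age parameter; when handling a condition,
# prefer the data recorded at the robot's current age, then fall back year by year
def find_action_data(database, condition, current_age):
    for age in range(current_age, -1, -1):
        key = (condition, age)
        if key in database:
            return database[key]
    return None  # no data down to age 0: do not handle this condition

database = {
    ("somebody asks your name", 10): "say: I am a kid called yingshaoxo",
    ("somebody asks your name", 20): "say: I am yingshaoxo",
}

print(find_action_data(database, "somebody asks your name", 25))
# falls back from age 25 to age 20 -> "say: I am yingshaoxo"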