nb_log_config.py
LOG_FILE_SIZE = 0.5  # unit is MB; the slice size of each log file, rotated automatically once exceeded
LOG_FILE_BACKUP_COUNT = 5  # maximum number of backups kept for the same log file; older ones are deleted beyond this limit
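As a point of reference (an assumption on my side, since nb_log's own file handler may differ in details such as multi-process locking), these two settings describe the same size-based rotation with numbered backups that the standard library's logging.handlers.RotatingFileHandler provides:

import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    'script.log',
    maxBytes=int(0.5 * 1024 * 1024),  # LOG_FILE_SIZE = 0.5 MB per slice
    backupCount=5,                    # LOG_FILE_BACKUP_COUNT = 5 backups, oldest deleted first
)
logging.getLogger('script').addHandler(handler)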
code:
import datetime
import threading
import time
from multiprocessing import Process

import nb_log

current_time = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
log_filename = f'script_{current_time}.log'
log = nb_log.LogManager("script", logger_cls=nb_log.CompatibleLogger).get_logger_and_add_handlers(
    log_filename=log_filename)


def print_log(i):
    for _ in range(10000):
        log.info(f'thread_{i}: {_}')
        time.sleep(0.0001)


def f():
    thread_list = []
    for i in range(1):
        t = threading.Thread(target=print_log, args=(f'{i}',), name=f'thread_{i}')
        t.start()
        thread_list.append(t)
    for t in thread_list:
        t.join()


if __name__ == '__main__':
    for i in range(10):
        Process(target=f, name=f"Process_{i}").start()
The generated log file names are as follows:
├── script_2025-04-03_14-23-15.log
├── script_2025-04-03_14-23-15.log.1
├── script_2025-04-03_14-23-15.log.2
├── script_2025-04-03_14-23-15.log.3
├── script_2025-04-03_14-23-15.log.4
└── script_2025-04-03_14-23-15.log.5
My expectations are as follows:
├── script_2025-04-03_14-23-15.log
├── script_2025-04-03_14-23-20.log.1
├── script_2025-04-03_14-23-25.log.2
├── script_2025-04-03_14-23-30.log.3
├── script_2025-04-03_14-23-35.log.4
└── script_2025-04-03_14-23-40.log.5
How can this be achieved?
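For what it's worth, one possible direction (a minimal sketch, not nb_log's documented API): keep size-based rotation, but name each rotated slice with the time of the rollover instead of a numeric suffix, by subclassing the standard library's RotatingFileHandler and overriding doRollover. The class below is hypothetical and ignores the multi-process contention my example creates (10 processes writing the same file), which nb_log's built-in handlers are designed to handle, so it is illustration only.

import glob
import os
import time
from logging.handlers import RotatingFileHandler


class TimestampRotatingFileHandler(RotatingFileHandler):
    """Hypothetical handler: rotates by size like RotatingFileHandler, but
    each rotated slice is renamed to <base>.<rollover time>, e.g.
    script.log.2025-04-03_14-23-20, instead of script.log.1."""

    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        # Rename the full file to <base>.<time of this rollover>
        stamp = time.strftime('%Y-%m-%d_%H-%M-%S')
        dfn = f'{self.baseFilename}.{stamp}'
        if os.path.exists(dfn):
            # Two rollovers within the same second: drop the older slice
            os.remove(dfn)
        if os.path.exists(self.baseFilename):
            os.rename(self.baseFilename, dfn)
        # Keep only the newest backupCount rotated slices (timestamps sort lexicographically)
        if self.backupCount > 0:
            backups = sorted(glob.glob(self.baseFilename + '.*'))
            for old in backups[:-self.backupCount]:
                os.remove(old)
        if not self.delay:
            self.stream = self._open()

You would then attach such a handler yourself with logging.getLogger('script').addHandler(...) rather than relying on the numbered rotation, unless nb_log already offers a timestamp-named rotation handler I'm not aware of.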