Add a really simple way to summon new parallel functions later #2263
base: mc-1.20.x
Conversation
Looks good to me
|
This is also available for older versions as an optional datapack. It has been tested on Fabric 1.20.1, once. |
|
I like the idea of this, but the implementation feels very weird, and the drawbacks push me away from even using it. For example, if you're using a

On top of that, having

```lua
local function add(y, z)
    print(y + z)
end

summon(add, 7, 12) --> 19
```

The current setup feels like it would make a lot of use cases moot, and you'd have to write some rather delicious spaghetti to get what you want working. I would propose instead a |
|
Hi, thanks for your reply :3
This is something I'm not sure how to solve. Sure, I could give different any/all rules for tasks started afterwards, but what if you don't want that - you want a subtask to end it all? Would I need to add some config object to `parallel.waitFor*`?
I have considered this too, but I'm unsure how I would work this out while still forwarding `summon`. Right now, summoned functions receive `summon`, so that they can be part of other files etc. and you don't need to share your `summon` function manually. If I allow arguments, you might call, say, `rednet.send`, which would cause
This seems like a reasonable idea, although it would mean you pass control over when parallel tasks start to run away from your own scripts and towards the global coroutines. That's fine for most use cases, but 🤷 idk. I have, for now, turned this into a draft PR, as I'd like to work out some suggestions before merging this as-is. |
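One possible shape for the config object mentioned above, purely as a hypothetical sketch (none of these fields exist in CC: Tweaked's parallel API today):

```lua
-- Purely hypothetical: a config-table overload for parallel.waitForAll,
-- sketching how completion rules for later-summoned tasks could be made
-- explicit. Neither the table form nor these fields exist in the real API.
parallel.waitForAll {
    functions = { taskA, taskB },
    -- How tasks summoned after the fact count towards completion:
    --   "all"    -> every summoned task must also finish
    --   "daemon" -> summoned tasks are killed once the original set finishes
    summoned = "all",
}
```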
|
Thanks for the PR! Yeah, the coroutine scheduler design space is incredibly tricky, which is partly why I've been avoiding it! Every time I've needed a scheduler, I've ended up writing a new one from scratch, as I always need something slightly different. The version I posted in #1734 (comment) is pretty similar to the

```lua
run(function(spawn)
    spawn(function() for i = 1, 3 do print("A", i) sleep(0.5) end end)
    spawn(function(n) for i = 1, n do print("B", i) sleep(0.5) end end, 5)
end)
```

My gut feeling here would be to not support

You could go the route of

```lua
run(function(scope)
    scope:spawn(function() for i = 1, 3 do print("A", i) sleep(0.5) end end)
    scope:spawn_daemon(function(n) for i = 1, n do print("B", i) sleep(0.5) end end, 5)
end)
```

No. Programs shouldn't be able to spawn code that runs outside of their own scope — that way lies madness. |
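For what it's worth, the `run`/`spawn` shape above can be sketched in plain Lua. This is a hypothetical toy, not CC: Tweaked's actual scheduler: there is no event filtering, so tasks simply `coroutine.yield()` instead of sleeping.

```lua
-- Hypothetical sketch only: a toy structured scheduler in plain Lua 5.2+.
-- run() resumes tasks round-robin and returns once every task is dead.
local function run(main)
    local tasks = {}

    -- Schedule a function; extra arguments are passed on the first resume.
    local function spawn(fn, ...)
        local args = table.pack(...)
        tasks[#tasks + 1] = coroutine.create(function()
            fn(table.unpack(args, 1, args.n))
        end)
    end

    spawn(main, spawn)
    while #tasks > 0 do
        -- Iterate downwards so finished tasks can be removed in place.
        for i = #tasks, 1, -1 do
            local ok, err = coroutine.resume(tasks[i])
            if not ok then error(err, 0) end
            if coroutine.status(tasks[i]) == "dead" then table.remove(tasks, i) end
        end
    end
end

run(function(spawn)
    spawn(function() for i = 1, 3 do print("A", i) coroutine.yield() end end)
    spawn(function(n) for i = 1, n do print("B", i) coroutine.yield() end end, 5)
end)
```

Tasks spawned mid-resume land at the end of the list and are picked up on the next pass, so `run` only returns once the whole tree of spawned work has finished.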
I am still trying to work out how I'd want to manage a "hey please wait for me, I'm part of the 'all'!!" and an "oh ignore me, I don't count towards all" type of situation. I do see the use case. |
|
I do have an alternative idea that I came up with on the spot, probably bad though: a

```lua
local shouldQuit = false
parallel.waitForAll(..., function()
    parallel.waitForFirst(function()
        while true do
            if shouldQuit then return end -- this will kill all others
            os.pullEvent()
        end
    end, function()
        -- I am unimportant. If I exit, you should not give a fuck.
        -- If I decide not to exit, you should not give a fuck. Kill me.
    end)
end)
```

|
|
That could be useful, in combination with `summon` and `waitForAny`.
This case can be rewritten with

```lua
local shouldQuit = false
parallel.waitForAll(..., function()
    xpcall(parallel.waitForAll, function(err)
        if not shouldQuit then
            error(err)
        end
    end, function()
        while true do
            if shouldQuit then error('exited', 0) end -- this will kill all others
            os.pullEvent()
        end
    end, function()
        -- I am unimportant. If I exit, you should not give a fuck.
        -- If I decide not to exit, you should not give a fuck. Kill me.
    end)
end)
```

If you are seeking advanced coroutine management, I'd recommend you check out my

Unlike the parallel API, it does not silently drop coroutines before exit, but fires a special

However, yeah, it does not provide any daemon flag because I currently don't have a use for it (if a |
Hmm, perhaps implement the parallel.run function as a per-instance system that the user must start in their own program? Much like how many current "thread" libraries work now.

```lua
-- predeclare so the queue object is visible to the program
local queue

local function a()
    ...
end

local function b()
    ...
end

local function c()
    ...
    -- spawn a new thread
    queue.spawn(a)

    -- spawn a new thread, keep the object so we can manipulate it later
    local x = queue.spawn(b)
    ...

    -- More advanced methods for dealing with individual coroutines can also be added with this
    -- Hypothetical methods
    queue.runOnce(x, "mouse_click", ...)
    queue.pause(x)
    queue.unpause(x)
    -- alternatively
    -- x:runOnce(...)
    -- x:pause()
    -- x:unpause()
    queue.kill(x)
    -- x:kill()

    queue.stop()
    ...
end

queue = parallel.run(c) -- Runs until `queue.stop()` or no threads are alive
```

In this case, I just personally really dislike the idea of adding this to |
I feel this

```lua
queue = parallel.run(c) -- Runs until `queue.stop()` or no threads are alive
```

is sus. Since `parallel.run` will block until no threads are alive, that means

I also do not think |
@zyxkad This seems like a weird hack; not sure if I like this. |
That's why

The other functions were also just hypothetical. Maybe `pause` doesn't make it in for the reasons you've stated, maybe it does. But I was just showing that you could have more advanced control over each thread. |
Yeah, I understand the declaration, but what I'm saying is it will be |
|
My suggestion:

```lua
local parallelHost = parallel.newHost()
local id = parallelHost.add(function() end)
parallelHost.add(function() parallelHost.remove(id) end)
parallelHost.waitForAll() -- returns only after all added functions stop
```

|
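A hedged sketch of what such a host might look like in plain Lua. The names `newHost`, `add`, `remove` and `waitForAll` come from the suggestion above; the real parallel API's event plumbing is left out, and tasks simply yield cooperatively.

```lua
-- Hypothetical toy implementation of the suggested host API; not real CC: Tweaked code.
local function newHost()
    local host, tasks, nextId = {}, {}, 1

    -- Add a function to the set; returns an id usable with remove().
    function host.add(fn)
        local id = nextId
        nextId = nextId + 1
        tasks[id] = coroutine.create(fn)
        return id
    end

    -- Drop a task from the set; it is simply never resumed again.
    function host.remove(id)
        tasks[id] = nil
    end

    -- Resume everything round-robin; returns only after all added functions stop.
    function host.waitForAll()
        while next(tasks) ~= nil do
            -- Snapshot ids so tasks may call add/remove while we resume them.
            local ids = {}
            for id in pairs(tasks) do ids[#ids + 1] = id end
            for _, id in ipairs(ids) do
                local co = tasks[id]
                if co then
                    local ok, err = coroutine.resume(co)
                    if not ok then error(err, 0) end
                    if coroutine.status(co) == "dead" then tasks[id] = nil end
                end
            end
        end
    end

    return host
end
```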
Ah woops, you're right. Probably a mix of Lupus' idea with mine would work then. You create a queue first, then tell it to run the queue. |
|
Hrmr, but maybe you want to emphasise the structured nature of this a bit more, and have a helper function to create a

```lua
parallel.runHost(function(parallelHost)
    local id = parallelHost.add(function() end)
    parallelHost.add(function() parallelHost.remove(id) end) -- Please no remove.
end)
```

Hey, wait a minute! Anyway, structured concurrency is great, and I'd much rather keep that approach than having separate construct/run stages. |
@SquidDev Isn't this more or less like the original proposal, with the inclusion of a way to cancel coroutines? |
|
I couldn't really think of any great solutions without overcomplicating this. `waitForFirst` I can do if there's interest; otherwise, I'll mark this ready for review. |
I love the idea of coroutine managers like the ones people in the Discord have created, but oftentimes all I need is a simple "hey, create a new coroutine and call it a day" library. However, I hate always having to pull in a library for this, to the point that on a server of mine I have added one to the ROM. This is obviously subpar, as scripts made for the server no longer work out of the box on other CCT worlds or packs.
As such, I propose a backwards-API-compatible way to summon new threads after the fact. This is dead simple: it does not let you kill the thread, makes no assumptions about order (although it may; TODO? I'm not sure how this should be laid out though), and just exists for the purpose of really simple yet dynamic scripts that require function parallelism and would like to stay on the stdlib.
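The intended semantics can be mocked in plain Lua. This is a hypothetical illustration rather than the PR's implementation: there is no event handling, and each function in the set receives `summon` on its first resume.

```lua
-- Hypothetical mock of the proposed behaviour: every function in the set
-- receives `summon`, which schedules another function into the same call.
local function waitForAllSummonable(...)
    local running, pending = {}, { ... }

    local function summon(fn)
        pending[#pending + 1] = fn
    end

    repeat
        -- Promote newly summoned functions into the running set.
        for _, fn in ipairs(pending) do
            running[#running + 1] = coroutine.create(fn)
        end
        pending = {}

        for i = #running, 1, -1 do
            local co = running[i]
            -- `summon` is only seen on the first resume; on later resumes it
            -- comes back as the return value of coroutine.yield and is ignored.
            local ok, err = coroutine.resume(co, summon)
            if not ok then error(err, 0) end
            if coroutine.status(co) == "dead" then table.remove(running, i) end
        end
    until #running == 0 and #pending == 0
end

waitForAllSummonable(function(summon)
    summon(function() print("summoned!") end)
    print("original")
end)
```

The key property the proposal describes is the forwarding: a summoned function receives `summon` itself, so it can in turn schedule more work without the caller sharing any handle manually.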
meow