Conversation

@tizu69 tizu69 commented Aug 11, 2025

I love the idea of coroutine managers like the ones people in the Discord have created, but often all I need is a simple "hey, create a new coroutine and call it a day" library. However, I hate always having to pull in a library for this, to the point that on a server of mine I have added one to the ROM. This is obviously subpar, as scripts made for the server no longer work out of the box on other CC:T worlds or packs.

As such, I propose a backwards-API-compatible way to summon new threads after the fact. This is dead simple - it does not let you kill the thread, makes no assumptions about ordering (although it could; TODO? I'm not sure how that should be laid out), and exists purely for really simple yet dynamic scripts that need function parallelism and would like to stay on the stdlib.
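To make the shape concrete, here is a minimal usage sketch of the proposed API. The exact signature is my reading of this PR (in particular, that summoned functions also receive summon, as discussed below), so treat the details as assumptions:

```lua
-- Hypothetical usage of the proposed extension: each function run by
-- parallel.waitForAll receives a `summon` function, which registers an
-- additional coroutine with the already-running scheduler.
parallel.waitForAll(function(summon)
  print("original task")
  summon(function(summon)
    -- Summoned functions receive `summon` too, so they can spawn further
    -- coroutines without the caller sharing it manually.
    print("spawned after the fact")
  end)
end)
-- waitForAll returns only once the summoned coroutine has also finished.
```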

meow

@scmcgowen scmcgowen left a comment


Looks good to me


tizu69 commented Aug 11, 2025

This is also available for older versions as an optional datapack. It has been tested on Fabric 1.20.1... once.

parallel-extensions.zip


fatboychummy commented Aug 12, 2025

I like the idea of this, but the implementation feels very weird and the drawbacks push me away from even using it. For example, if you're using a waitForAny, you cannot summon a task which ends without ending the parent tasks as well. Similarly, all child processes must now exit in order for waitForAll to exit.

On top of that, having summon take multiple parameters but only of type function feels weird. I would rather "summon" each thread individually, especially since you could then refactor the method to pass any extra inputs to the function, e.g.:

local function add(y, z)
  print(y + z)
end

summon(add, 7, 12) --> 19

The current setup feels like it would make a lot of use cases moot, and you'd have to write some rather delicious spaghetti to get what you want working.

I would propose instead a parallel.run function that is added to the main coroutines (alongside rednet.run etc.). Then, another function parallel.spawn that just adds coroutines to the parallel.run queue.
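Roughly like this (both names are hypothetical; neither function exists today):

```lua
-- Sketch: parallel.run would drive a queue of coroutines alongside the
-- program's main coroutine, and parallel.spawn would append to that queue
-- instead of blocking the caller like parallel.waitForAll does.
parallel.spawn(function()
  local _, message = rednet.receive()
  print(message)
end)
-- The current program keeps running; the spawned coroutine is scheduled
-- by parallel.run in the background.
```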


tizu69 commented Aug 13, 2025

Hi, thanks for your reply :3

For example, if you're using a waitForAny, you cannot summon a task which ends without ending the parent tasks as well. Similarly, all child processes must now exit in order for waitForAll to exit.

This is something I'm not sure how to solve. Sure, I could apply different any/all rules to tasks started afterwards, but what if you don't want that and instead want a subtask to end it all? Would I need to add some config object to parallel.waitFor*?

having summon take multiple parameters but only of type function feels weird.

summon mimics the arguments of the parallel.waitFor* functions.

I would rather "summon" each thread individually, especially so since you could then refactor the method to push any inputs to the function, [...]

I have considered this too, but I'm unsure how I would make it work while still forwarding summon. Right now, summoned functions receive summon, so that they can live in other files etc. and you don't need to share your summon function manually. If I allowed extra arguments, you might call, say, rednet.send, and <recipient> would end up being the summon function. If I don't forward the summon function by default, you need to create another function either way, to forward summon as needed. This seems like an endless debate; I am happy to implement this once I (or someone else) find a good solution that fits the different use cases: calling external functions, forwarding summon, and so on.
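To illustrate the clash (a hypothetical, assuming summon both injected itself and forwarded extra user arguments):

```lua
-- Under that assumption, a call like
--   summon(rednet.send, 42, "hello")
-- would effectively invoke
--   rednet.send(summon, 42, "hello")
-- i.e. the recipient parameter silently becomes the summon function.
-- Avoiding that requires a wrapper function either way:
summon(function(summon)
  rednet.send(42, "hello")
end)
```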

I would propose instead a parallel.run function that is added to the main coroutines (alongside rednet.run and etc). Then, another function parallel.spawn that just adds coroutines to the parallel.run queue.

This seems like a reasonable idea, although it would mean handing control over when parallel tasks start running from your own script to the global coroutines, which is fine for most use cases, but 🤷 idk

I have, for now, turned this into a draft PR as I'd like to work out some suggestions before merging this as-is.

@tizu69 tizu69 marked this pull request as draft August 13, 2025 09:08
@SquidDev
Member

Thanks for the PR! Yeah, the coroutine scheduler design space is incredibly tricky, which is partly why I've been avoiding it! Every time I've needed a scheduler, I've ended up writing a new one from scratch, as I always need something slightly different.

The version I posted in #1734 (comment) is pretty similar to the waitForAll version you've got here, with the main difference being that the spawn (or summon) function is only passed to the initial function:

run(function(spawn)
  spawn(function() for i = 1, 3 do print("A", i) sleep(0.5) end end)
  spawn(function(n) for i = 1, n do print("B", i) sleep(0.5) end end, 5)
end)

My gut feeling here would be to not support waitForAny at all — most of the time when you need a "race" function, it's not dynamic.

You could go the route of eio, and have separate methods to spawn normal and "daemon" coroutines. This is definitely useful IRL, but I'm not sure about here — that's half the problem, lots of things make sense in a fully-featured library like eio or trio, but less so in CC!

run(function(scope)
  scope:spawn(function() for i = 1, 3 do print("A", i) sleep(0.5) end end)
  scope:spawn_daemon(function(n) for i = 1, n do print("B", i) sleep(0.5) end end, 5)
end)

I would propose instead a parallel.run function that is added to the main coroutines (alongside rednet.run and etc)

No. Programs shouldn't be able to spawn code that runs outside of their own scope — that way lies madness.

@SquidDev SquidDev added the enhancement and area-CraftOS labels Aug 13, 2025

tizu69 commented Aug 13, 2025

My gut feeling here would be to not support waitForAny at all — most of the time when you need a "race" function, it's not dynamic.

Should this be something explicitly blocked? As of right now, I have mainly allowed it for the purposes of "why the fuck not?"; given it's the same function, it might be easier to just not care. Maybe people will find a use?

I am still trying to work out how I'd want to handle a "hey, please wait for me, I'm part of the 'all'!" versus an "oh, ignore me, I don't count towards the 'all'" type of situation. I do see the use case, though.


tizu69 commented Aug 13, 2025

I do have an alternative idea that I came up with on the spot, probably bad though: a waitForFirst. It's a waitForAllOrTheFirstOne, essentially.
The idea here would be that if you want something that should not count towards the "all"-ness of waitForAll, you reach for something like this. This probably comes with tons of drawbacks though:

local shouldQuit = false
parallel.waitForAll(..., function()
	parallel.waitForFirst(function()
		while true do
			if shouldQuit then return end -- this will kill all others
			os.pullEvent()
		end
	end, function()
		-- I am unimportant. If I exit, you should not give a fuck.
		-- If I decide not to exit, you should not give a fuck. Kill me.
	end)
end)

@scmcgowen

That could be useful, in combination with summon and waitForAny.


zyxkad commented Aug 14, 2025

I do have an alternative idea that I came up with on the spot, probably bad though: a waitForFirst. It's a waitForAllOrTheFirstOne, essentially.

This case can be rewritten with waitForAll:

local shouldQuit = false
parallel.waitForAll(..., function()
	xpcall(parallel.waitForAll, function(err)
		if not shouldQuit then
			error(err)
		end
	end, function()
		while true do
			if shouldQuit then error('exited', 0) end -- this will kill all others
			os.pullEvent()
		end
	end, function()
		-- I am unimportant. If I exit, you should not give a fuck.
		-- If I decide not to exit, you should not give a fuck. Kill me.
	end)
end)

If you are seeking advanced coroutine management, I'd recommend you check out my coroutinex. It's a library that simulates JavaScript's Promise API. It provides a fairly complex coroutine runtime, and contains a log system for easier debugging. I'll admit its API does not fit Lua style (and is sometimes even counterintuitive), because I am not an expert in Lua.


Unlike the parallel API, it does not silently drop coroutines before exit but fires a special terminate event to allow subthreads/subprocesses to do any final cleanup; it is even possible to refuse the termination if the stop was requested by another coroutine.

However, yeah, it does not provide any daemon flag because I currently don't have a use for it (if a terminate event fires outside of the runtime, all of the coroutines will receive the terminate event and shut down immediately, unless you defined a special handler for the event).


fatboychummy commented Aug 14, 2025

@SquidDev:
No. Programs shouldn't be able to spawn code that runs outside of their own scope — that way lies madness.

Hmm, perhaps implement the parallel.run function as a per-instance system that the user must start in their own program? Much like how many current "thread" libraries work now.

-- predeclare so the queue object is visible to the program
local queue

local function a()
  ...
end

local function b()
  ...
end

local function c()
  ...

  -- spawn a new thread
  queue.spawn(a)

  -- spawn a new thread, keep the object so we can manipulate it later
  local x = queue.spawn(b)

  ...

  -- More advanced methods for dealing with individual coroutines can also be added with this
  -- Hypothetical methods
  queue.runOnce(x, "mouse_click", ...)
  queue.pause(x)
  queue.unpause(x)
  -- alternatively
  -- x:runOnce(...)
  -- x:pause()
  -- x:unpause()

  queue.kill(x)
  -- x:kill()

  queue.stop()
  ...
end

queue = parallel.run(c) -- Runs until `queue.stop()` or no threads are alive

In this case, c would get run, then a and b would get added. There might be a nicer way to do this, but this way parallel.run can be per-program while still allowing new threads.

I just personally really dislike the idea of adding this to parallel.waitForAny/All -- they already serve their own purpose and they do it very well. Thus, instead of bloating an existing method, I think it may be better to introduce new ones.


zyxkad commented Aug 14, 2025

Hmm, perhaps implement the parallel.run function as a per-instance system that the user must start in their own program? Much like how many current "thread" libraries work now.

I feel this

queue = parallel.run(c) -- Runs until `queue.stop()` or no threads are alive

is sus. Since parallel.run will block until no threads are alive, that means queue will never be assigned before all threads are done. I guess a better way is to pass the queue as an argument to c. Or, do not block in parallel.run (and rename run to createQueue or something) but have a waitForAll method on the queue.
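A rough sketch of that second shape (createQueue and the method names are placeholders, not a real API):

```lua
-- Construct-then-run: the queue exists before anything is scheduled, so
-- there is no nil-until-run-returns problem.
local queue = parallel.createQueue() -- constructs only; does not block
queue.spawn(function()
  queue.spawn(anotherTask) -- `queue` is already assigned, so this works
end)
queue.waitForAll() -- blocks here until every spawned thread has finished
```

(`anotherTask` stands in for any function you want scheduled.)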

I also do not think pause and unpause are a good idea, since they are very easy to use incorrectly. For example, if you are paused while waiting for a specific event (e.g. a timer / main-thread task), you will miss that event. And if you build an internal queue to cache events and prevent event loss, you may well run out of memory when a task stays paused for a long time.


tizu69 commented Aug 14, 2025

This case can rewritten with waitForAll

@zyxkad This seems like a weird hack, not sure if I like this


fatboychummy commented Aug 14, 2025

Since parallel.run will block until no threads are alive, that means queue will never be assigned before all threads are done?

That's why queue is predeclared at the very top. This is a common pattern in programming.

The other functions were also just hypothetical. Maybe pause doesn't make it in for the reasons you've stated, maybe it does. But I was just showing that you could have more advanced control over each thread.


zyxkad commented Aug 15, 2025

Since parallel.run will block until no threads are alive, that means queue will never be assigned before all threads are done?

That's why queue is predeclared at the very top. This is a common pattern in programming.

Yeah, I understand the declaration, but what I'm saying is that it will be nil until parallel.run returns.


Lupus590 commented Aug 15, 2025

My suggestion:

local parallelHost = parallel.newHost()
local id = parallelHost.add(function() end)
parallelHost.add(function() parallelHost.remove(id) end)
parallelHost.waitForAll() -- returns only after all added functions stop

@fatboychummy
Contributor

Yeah, I understand the declaration, but what I'm saying is that it will be nil until parallel.run returns.

Ah, whoops, you're right. Probably a mix of Lupus' idea with mine would work then: you create a queue first, then tell it to run the queue.

@SquidDev
Member

Hrmr, but maybe you want to emphasise the structured nature of this a bit more, and have a helper function to create a parallelHost and then run its contents, a bit like you might have a with_file("file.txt", "r", function(h) return h.readAll() end) helper...

parallel.runHost(function(parallelHost)
  local id = parallelHost.add(function() end)
  parallelHost.add(function() parallelHost.remove(id) end) -- Please no remove.
end)

Hey, wait a minute!


Anyway, structured concurrency is great, and I'd much rather keep that approach than having a separate construct/run stages.


tizu69 commented Aug 15, 2025

parallel.runHost(function(parallelHost)
  local id = parallelHost.add(function() end)
  parallelHost.add(function() parallelHost.remove(id) end) -- Please no remove.
end)

@SquidDev Isn't this more or less the original proposal, with the addition of a way to cancel coroutines?


tizu69 commented Aug 19, 2025

I couldn't really think of any great solutions without overcomplicating this.

waitForFirst I can do if there's interest; otherwise, I'll mark this ready for review.

@tizu69 tizu69 marked this pull request as ready for review August 19, 2025 19:54