Replies: 6 comments 1 reply
-
Halfpipe is still a little half-baked :)
The intent is to take some of the concepts from cli_scripts, which allows
you to combine external processes and dart code into a single pipeline
using a builder-style pattern.
The need for awaits does appear to make this a little uglier than I would
like.
Here is an example of the broad idea I'm trying to achieve:
```dart
final pipe = HalfPipe()
  ..run('ls')
  // process the output of ls through a block of dart code
  ..process((stdin, stdout, stderr) async {
    await for (final line in stdin) {
      print('file: $line');
    }
    printerr('something went wrong');
  })
  // redirect any output to stderr back to stdout
  ..redirect(Pipeline.errToOut)
  // send each line to the 'rm' command - which won't work
  // because of the 'file: ' prefix
  ..pipe('rm')
  // A second block of dart code
  ..process((stdin, stdout, stderr) async {
    await for (final line in stdin) {
      print('2nd block: $line');
    }
  });
```
The node child_process looks interesting but there are some ugly pieces:
```js
ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
```
If I've understood this correctly, it's using a 'string' to match the
process exiting. I would have a problem with this error-prone style of
processing.
Having said that, I would be interested in a collaboration that takes the
best ideas from other languages but implements them in a dart-native
manner and is fully type safe.
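For contrast, here is a minimal sketch of how the same 'wait for exit' reads with plain dart:io, where the exit code is already a typed Future<int> rather than a string-keyed event (the 'ls -l' invocation is just an example):
```dart
import 'dart:convert';
import 'dart:io';

Future<void> main() async {
  // Process.start gives us a typed handle rather than a string-keyed
  // event emitter: exitCode is a Future<int> we can simply await.
  final process = await Process.start('ls', ['-l']);

  // stdout is a Stream<List<int>>; decode it and split it into lines.
  process.stdout
      .transform(utf8.decoder)
      .transform(const LineSplitter())
      .listen((line) => print('file: $line'));

  final code = await process.exitCode;
  print('child process exited with code $code');
}
```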
On Wed, Apr 17, 2024 at 12:12 PM Pascal Welsch wrote:
Hey @bsutton <https://github.com/bsutton>,
You mentioned halfpipe a few times. Can you share a rough outline of what
it is about?
I plan to port node's child_process API to dart. Is it similar?
-
Interesting! The name is a perfect 👌
Hell no, I wouldn't port that part to Dart!
Beauty of child_process
What I like about child_process is the simplicity of exec (waits for exit
code) and spawn (allows communication with the running process). exec
automatically buffers the output (up to maxBuffer), making it easy to
process the output after the process has finished.
I found only one major alternative to child_process on node
(https://www.npmjs.com/package/execa). It seems like most are satisfied
with the status quo.
State of dcli and cli_scripts
Generally, I'm quite happy with start and run by dcli. What could be improved:
- Embrace the event loop! Make it a Future based API. (dropping ffi for less breaking dependencies)
- Make it even easier to get to the output of processes. Progress often feels clunky. This could also improve pipelines.
cli_scripts has not been as intuitive for me. It feels more cumbersome than
using Process.start, and the benefits haven't justified my time spent
fighting the API. I tried pushing it into a different direction but without
success. I'm now at the point of moving forward.
Piping
I usually follow “Simple things should be simple, complex things should be
possible” - Alan Kay. I rarely piped stdout into stdin of another process.
Therefore, a simple piping API is only a nice-to-have for me. My current
workaround for this scenario is writing a shell script (plain text) and
executing it as a shell script (writeAndRunShellScript).
For the halfpipe API to work, I suggest not to use the cascade operator.
checks <https://pub.dev/packages/checks> moved away from it, because users
had a hard time using it. Especially when it came to handling Futures.
First experiments
I'm still experimenting but I also want to share a draft with you. My focus
is on useful defaults.
- always prints to stdout, unless silent: true
- workingDirectory is required. cd can't screw up subsequent processes
- capture happens automatically (up to maxBuffer). The entire output can be accessed after the process has finished, without defining it beforehand.
- exec throws on exitCode != 0 and prints the captured stdout+stderr. (must be configurable)
- kill is async and waits for the process to exit and stdout and stderr to be closed.
```dart
final ProcessResult result = await exec(
  'echo',
  ['hello'],
  workingDirectory: userHome, // required, don't trust cwd
  silent: true, // hides output from parent process
);

// stdout is automatically captured up to maxBuffer
expect(result.stdoutLines, ['hello\n']);
// When maxBuffer is reached, it emits a warning and forgets the oldest lines (fifo)

final ChildProcess process = spawn(
  'tail',
  ['-f', '-n 1', 'log.txt'],
  workingDirectory: tempDir,
  /*silent: false,*/ // default, also forwards pipes to parent process
);

process.stdout.listen((line) {
  lines.add(utf8.decode(line));
});
process.stderr.listen((line) {
  lines.add(utf8.decode(line));
});

// completes after stdout and stderr are closed
await process.kill();
// kill only fails when the process can't be killed
```
-
inline
On Thu, Apr 18, 2024 at 12:22 PM Pascal Welsch wrote:
Interesting! The name is a perfect 👌
If I've understood this correctly it's using a 'string' to match the
process exiting. This error prone style of processing I would have a
problem with.
Hell no, I wouldn't port that part to Dart!
Beauty of child_process
What I like about child_process is the simplicity of exec (waits for exit
code) and spawn (allows communication with running process).
exec automatically buffers the output (up to maxBuffer) making it easy to
process the output after the process has finished.
The question in my mind is: do we try to support a sync version and an
async version? Supporting sync does make the work more difficult, but I do
see a path through.
Given that dcli just about has its sync actions working, it's probably better
to spend resources on an async version as that is definitely a problem
area.
I do have a problem with the buffer idea, more later in this thread.
I found only one major alternative to child_process on node (
https://www.npmjs.com/package/execa). It seems like most are satisfied
with the status quo.
State of dcli and cli_scripts
Generally, I'm quite happy with start and run by dcli. What could be
improved:
- Embrace the event loop! Make it a Future based API. (dropping ffi
for less breaking dependencies)
- Make it even easier to get to the output of processes. Progress
often feels clunky. This could also improve pipelines
I do agree that progress feels clunky, and part of the reason for halfpipe
is to fix that and build a more flexible way of building pipelines.
cli_scripts has not been as intuitive for me.
It feels more cumbersome than using Process.start, and the benefits
haven't justified my time spent fighting the API.
I tried pushing it
<https://github.com/google/dart_cli_script/issues?q=is%3Aissue+sort%3Aupdated-desc+is%3Aclosed+author%3Apasssy>
into a different direction but without success.
I'm now at the point of moving forward.
Have to agree on this one. The API just doesn't feel intuitive.
Piping
I usually follow *“Simple things should be simple, complex things should
be possible” - Alan Kay*.
I rarely piped stdout into stdin of another process. Therefore, a simple
piping API is only a nice-to-have for me.
This is something that I do do, and I would like to see the system support it.
Chaining processes together, particularly if we can interleave dart code,
is a very powerful way of building execution pipelines.
My current workaround for this scenario is writing a shell script (plain
text) and executing it as a shell script (writeAndRunShellScript
<https://github.com/phntmxyz/sidekick/blob/aead046c12ec6e478df35a1e190540361b313758/sidekick_core/lib/src/cli_util.dart#L46>
)
For the halfpipe API to work, I suggest not to use the cascade operator.
checks <https://pub.dev/packages/checks> moved away from it, because
users had a hard time using it.
Especially when it came to handling Futures.
I do agree with this. For pubspec_manager I moved away from the cascade
operator as it's too limiting.
I think the classic builder pattern, where each action returns an interface
with the permitted actions, is a better fit for the problem.
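To illustrate what I mean, here is a minimal sketch of the 'permitted actions' idea; every name in it is hypothetical rather than the real halfpipe API:
```dart
// Each step returns an interface that only exposes the actions legal at
// that point, and the single await sits on run(). All names here are
// hypothetical sketches, not the actual halfpipe API.
abstract class PipeSource {
  PipeStage command(String cmd, List<String> args);
}

abstract class PipeStage {
  PipeStage pipeTo(String cmd, List<String> args);
  PipeStage process(void Function(String line) onLine);
  PipeTerminal cache({int? maxBuffer});
}

abstract class PipeTerminal {
  Future<PipelineResult> run();
}

abstract class PipelineResult {
  List<String> get stdoutLines;
}

// Usage then reads left to right, with the compiler offering only the
// legal next steps at each stage:
//
//   final result = await pipeline
//       .command('ls', ['/'])
//       .pipeTo('head', ['-n', '1'])
//       .cache(maxBuffer: 1000)
//       .run();
```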
First experiments
I'm still experimenting but I also want to share a draft with you. My
focus is on useful defaults.
- always prints to stdout, unless silent: true
> agreed - but always under the user's control. The cli_script tooling
would use the console to output errors in a non-controllable manner.
I take the view that halfpipe can't output to the console unless the
user explicitly (or implicitly, by some overridable default) requests the
library to write to the console.
This is particularly important if the user is doing cursor manipulation.
So a standard like 'always prints to stdout, unless silent: true' seems
fine.
- workingDirectory is required. cd can't screw up subsequent processes
> I'm strongly opposed to 'cd', which is why all dcli commands take a
working directory.
- capture happens automatically (up to maxBuffer). The entire output
can be accessed after the process has finished, without defining it
beforehand.
> I don't think we should be buffering unless the user explicitly requests
it. The problem is that when the max buffer is hit, the user's app suddenly
starts failing.
My thought was that we are streaming everything.
If the user wants to capture output then we can provide a 'toList' operator
which buffers the output into a list - optionally with a max buffer
size (see the sketch after these points).
In this way the app will behave correctly with large data sets, and if the
user uses 'toList' they are making an explicit decision that it's
appropriate.
- exec throws on exitCode != 0 and *prints the captured stdout+stderr*.
(must be configurable)
generally agreed, but one of the aims is to allow the user to build
pipelines; in this case the stdout+stderr will be pushed to the next phase
in the pipeline rather than being printed.
- kill is async and waits for the process to exit and stdout and stderr
to be closed.
agreed, providing a way to wait for stdout/stderr to flush is important.
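Here is the sketch I mentioned above, using plain dart:io streams to show the streaming-by-default idea; a pipeline-level toList(maxBuffer: ...) operator is hypothetical, so take(maxBuffer).toList() stands in for it:
```dart
import 'dart:convert';
import 'dart:io';

Future<void> main() async {
  // Streaming by default: lines flow through without the whole output
  // ever being held in memory.
  final proc = await Process.start('ls', ['/']);
  final lines = proc.stdout
      .transform(utf8.decoder)
      .transform(const LineSplitter());

  // Buffering only happens when the caller asks for it, and the bound is
  // explicit. A hypothetical pipeline toList(maxBuffer: n) operator would
  // behave like take(n).toList() does here.
  const maxBuffer = 1000;
  final buffered = await lines.take(maxBuffer).toList();
  print('captured ${buffered.length} lines');

  await proc.exitCode;
}
```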
```dart
final ProcessResult result = await exec(
  'echo',
  ['hello'],
  workingDirectory: userHome, // required, don't trust cwd
  silent: true, // hides output from parent process
);

// stdout is automatically captured up to maxBuffer
expect(result.stdoutLines, ['hello\n']);
// When maxBuffer is reached, it emits a warning and forgets the oldest lines (fifo)

final ChildProcess process = spawn(
  'tail',
  ['-f', '-n 1', 'log.txt'],
  workingDirectory: tempDir,
  /*silent: false,*/ // default, also forwards pipes to parent process
);

process.stdout.listen((line) {
  lines.add(utf8.decode(line));
});
process.stderr.listen((line) {
  lines.add(utf8.decode(line));
});

// completes after stdout and stderr are closed
await process.kill();
// kill only fails when the process can't be killed
```
I would do this somewhat differently. My aim is to create a series of
orthogonal middleware that allows the user to create pipelines.
So maybe something like this:
```dart
final pipeline = await Pipeline()
    .command('echo', ['hellow'], workingDirectory: xxx)
    .cache() // takes a max buffer size.
    .run(); // run is async

// probably without the newline as that makes cross-platform easier.
expect(pipeline.stdout.firstline, 'hellow');
```
```dart
final pipeline = await Pipeline()
    .command('tail', ['-f', '-n 1', 'log.txt'], workingDirectory: tempDir)
    .run();

await pipeline.kill();
```
I don't understand your point about waiting for stdout/stderr to close. I
don't think tail ever closes either so kill would never complete.
So now let's build a pipeline.
```dart
int count = 0;
final pipeline = await Pipeline()
    .command('ls', ['/'])
    .redirectStderr // redirects stderr to stdout.
    .pipeTo('echo')
    .process((line) => print('$count: $line')) // need to support async processing here.
    .pipeTo('head', ['-n', '1'])
    .cache()
    .run(); // nothing starts until run is called - so just one async
            // operation allows chaining calls via the builder pattern.

expect(pipeline.stdout.firstline, '1: dev');
```
So by default we process data as string lines, but we need a mode to
support binary processing.
```dart
final pipeline = await Pipeline()
    .command('ls', ['*.png'])
    .binary // data is passed to the next process as a stream of ints.
    .pipeTo('resize', ['100', '200'])
    .run();
```
-
Here is the repo that I'm playing in. HalfPipe2 is my current focus around the API design.
-
I've done some playing with the API form and I think this is the style I'm aiming for:
```dart
final bigWav = File('big.wav');

/// merge the list of wavs in the current
/// directory into a single wav
/// by concatenating them together
await HalfPipe2()
    .command('ls *.wav') // run the 'ls' command to get a list of files.
    .transform(Transform.line) // Convert the output to lines.
    // Run dart code outputting <int> data.
    .block<int>((srcIn, srcErr, sinkOut, sinkErr) {
      // listen for the list of filenames
      srcIn.listen((wav) {
        // open each file and write the content
        // into sinkOut as <int> data
        final ras = File(wav).open();
        sinkOut.addStream(ras.stream);
        ras.close();
      });
      // just pass any errors to the next section in
      // the pipeline
      sinkErr.addStream(srcErr);
    })
    .write(bigWav) // save all the data into bigWav
    .run(); // start the pipeline.
```
Still lots of questions about error handling, and whether the above form is even possible (I have code for a chunk of it).
-
Oops, try now.
The code compiles and I'm working to get some unit tests together in half pipe_test.dart.
The first one almost works except for the call to Transform.line.
There is no error handling in the engine and at this point I'm not sure of
the correct path.
Unlike bash, I'm passing stderr between pipe sections in its own channel -
unsure if this is a good idea.
pipe_phase is where the core engine lives.
There are four main section types you can add to a pipeline:
- Command - run an external app.
- Processor - call dart code that is derived from the Processor class. Currently you can't change the type, but this may be a mistake; the aim is to provide a set of standard helper classes like Tee.
- Block - call dart code in a callback and potentially change the type.
- Transform - call a Converter; looking to provide a set of standard converters as well as use existing dart ones.
You then have terminal functions such as run and toList which cause the
pipeline to run.
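To make the shape concrete, here is a rough pseudocode sketch of how I imagine the four section types composing; the processor() call, the Tee helper and the exact block signature are all still up in the air, so don't read this as the current HalfPipe2 API:
```dart
// Rough pseudocode - names and signatures are not final.
final errors = await HalfPipe2()
    // Command: run an external app.
    .command('cat big.log')
    // Transform: convert the byte stream into lines via a Converter.
    .transform(Transform.line)
    // Processor: a reusable helper derived from Processor, e.g. a Tee
    // that copies every line to a file as it passes through.
    .processor(Tee(File('copy.log')))
    // Block: ad-hoc dart code; here it filters the stream and passes the
    // stderr channel through to the next section untouched.
    .block<String>((srcIn, srcErr, sinkOut, sinkErr) {
      srcIn.listen((line) {
        if (line.contains('ERROR')) {
          sinkOut.add(line);
        }
      });
      sinkErr.addStream(srcErr);
    })
    // Terminal function: runs the pipeline and collects its output.
    .toList();
```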
On Sun, 21 Apr 2024, 10:27 pm Pascal Welsch wrote:
private 😉
-
Hey @bsutton,
You mentioned halfpipe a few times. Can you share a rough outline of what it is about?
I plan to port node's child_process API to dart. Is it similar?