Lua is a tiny language that can do a lot. As a scripting language it can be embedded into larger projects, and its lightweight, flexible nature makes it a good fit for IoT devices. It supports both functional and object-oriented approaches, though only to a limited degree. Lua has no built-in support for advanced file manipulation, directory traversal, or process creation and management.
Apolo is an extension for Lua that adds these fundamental Bash shell capabilities. Before the beginning of this summer-long project, the package already supported a basic "run process" command. Our goal for this summer was to add advanced functionality to this command: output evaluation, piping, I/O redirection, and background process management through suspend(), wait(), terminate(), kill(), status() and exit_code().
Eval is a classic Bash shell capability that takes the output stream of the run process and redirects it as the return value. This makes running processes useful for accessor programs like ls and dir. Without this, run processes used in any formal environment would be limited to processes that mutate the surrounding environment without returning any values.
Eval implements a pipe that connects the process's standard output stream to a string buffer (with a limit of 1024 characters).
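The mechanism can be illustrated with plain Lua's io.popen, which likewise connects a child process's stdout to a readable handle. This is just a sketch of the idea, not Apolo's actual implementation, and it has no 1024-character limit:

```lua
-- Sketch of what eval does under the hood: open a read pipe
-- to the process and collect its stdout into a string.
local handle = io.popen('ls -la')
local output = handle:read('*a')  -- read the entire output stream
handle:close()
print(output)
```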
Evaluation is done using apolo.eval(command), where command is a string or a table. apolo.eval runs the process signified by the command argument. If command is a string, it will be parsed and executed. Otherwise, if it's a table, the first element of command will be the executable and all other elements will be the parameters:
require 'apolo':as_global()
-- equivalent commands
local file_contents = eval 'ls -la "foo bar"'
local file_contents = eval{'ls', '-la', 'foo bar'}
Eval returns the console output as a string when it's successful. Otherwise, it returns nil followed by the error string. This way, the user can wrap any eval call with assert:
local file_contents = assert(eval 'lls -la "foo bar"') -- Error: Command not found
Piping is a technique available on most operating systems that allows a direct connection between the output of one process and the input of another. This direct connection lets processes operate simultaneously and reduces the number of intermediate variables needed to send information from one job to the next. Piping is an invaluable tool for scripting languages, so it is natural that a version of piping be implemented in Lua.
The .pipe modifier turns apolo.run into a variadic function, allowing up to 32 processes:
run.pipe('ls -l', 'grep .txt', 'sort')
Similarly, the commands can be tables of arguments:
run.pipe({'ls', '-l'}, 'grep .txt', {'sort'})
local var = eval.pipe({'ls', '-l'}, 'grep .txt', {'sort'})
A piped function returns the same way the regular function would: run.pipe will return true, false or nil based on how the process completed, and eval.pipe will return the string output of the pipeline, or nil if the process failed.
apolo.run sends the process's output to the output stream, pushes errors to the error stream and reads its input from the input stream. This is less than ideal, as the input stream requires user input on a keyboard, which would be inconvenient for IoT and automation programs. This set of modifiers allows the user to redirect all of the above streams to files or to other streams.
.out_to and .append_out_to will write the process's output to a file.
.err_to and .append_err_to will write the process's errors to a file.
.from will change the process's input to read from a file.
.out_to_err will redirect the output to the error stream.
.err_to_out will redirect errors to the output stream.

For now, these modifiers only work on pipes in a limited fashion. In Linux, a process can be redirected so that any errors are put in its output stream and then piped to the next process, which can have a different set of modifiers and redirections. As of August 2019, the I/O redirection implementation only affects the first and last processes in a pipe: .from only works for the first process in a pipe, and .out_to, .err_to, .out_to_err and .err_to_out only affect the last process in a pipe.
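Given those rules, and assuming the modifiers chain on a pipe the same way they do on a single run call, a pipe could combine .from on its first process with .out_to on its last. This combination is my own illustration and the file names are placeholders:

```lua
-- .from feeds the first process; .out_to captures the last one's output
run.from("words.txt").out_to("sorted.txt").pipe('grep -v "^#"', 'sort')
```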
Executing the command as run.to will write the output to a file instead of to stdout. If the file already exists, its contents will be overwritten:
run.to("dir_files.txt")('ls -l')
Executing the command as run.append_to will append the output to a file instead of to stdout. If the file doesn't exist, it will be created; if it does exist, the output will be appended to the end of the file. If both .to and .append_to are used, the program will default to using .to and overwrite the file.
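For illustration, appending two directory listings to the same log file (the file name is a placeholder):

```lua
-- Each run appends another listing to the end of the file
run.append_to("dir_log.txt")('ls -l')
run.append_to("dir_log.txt")('ls -l')
```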
Executing the command as run.from will get the input from a file instead of from stdin:
run.from("programming-in-lua.txt")('grep "function"')
Executing the command as run.err_to will write the error stream (stderr) to a file. If the file already exists, its contents will be overwritten, just like .to.
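A usage sketch, following the same shape as the .to example above (the script name is a placeholder):

```lua
-- stderr goes to the file; stdout still reaches the console
run.err_to("errors.txt")('lua purposefully_broken_program.lua')
```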
Executing the command as run.append_err_to will append the error stream (stderr) to a file. It works like .append_to.
Executing the command as run.err_to_out will redirect the error stream (stderr) of all processes to the output stream (stdout). Modifiers like .err_to will no longer take the error stream into account, while modifiers like .out_to will:
run.err_to_out("lua purposefully_broken_program.lua")
run.err_to_out.out_to("err_log.txt")("lua purposefully_broken_program.lua")
Be careful when using this modifier on a pipe, as multiple piped processes may write their errors to stdout at the same time:
run.err_to_out.pipe("lua purposefully_broken_program.lua", "lua other_broken_program.lua")
Executing the command as run.out_to_err will redirect the output stream (stdout) of the process (or the last process in the pipe, if there is one) to the error stream (stderr). Similar to err_to_out, this modifier does not work with out_to but does work with err_to.
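Mirroring the err_to_out examples above, a sketch of that combination (the script name is a placeholder):

```lua
-- stdout is folded into stderr, so .err_to captures both streams
run.out_to_err.err_to("combined_log.txt")('lua purposefully_broken_program.lua')
```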
Among the most important requirements for embedded systems and task automation is the ability to run programs in the background and manage them in real time. Before this summer, a run.bg modifier did exist, but only the Linux implementation worked and there was no way to manage processes during their runtime.
This summer, I added a basic metatable that is returned by all calls to run.bg. This table comes with seven functions that can be used to manage that process:
proc:wait(): pushes the program to the foreground and waits until the process ends. Returns the exit code.
proc:suspend(): suspends the process.
proc:resume(): resumes the process.
proc:kill(): kills the process, similar to the SIGKILL signal on Unix systems.
proc:terminate(): terminates the process, similar to the SIGTERM signal on Unix systems.
proc:status(): returns running, suspended, failed or finished.
proc:exit_code(): returns the process's exit code if it has finished. If it errored out, was interrupted by terminate() or kill(), or is still running, the command returns nil.
Processes can also be accessed in bulk using the jobs command, which filters all processes to return the ones with a given status.
All run.bg commands return either their resulting process or nil:
local my_job = run.bg("lua long_program.lua")
print(my_job:status()) -- Prints "running"
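The remaining management functions fit together into a simple lifecycle. A sketch (long_program.lua is a placeholder for any long-running script):

```lua
local proc = run.bg("lua long_program.lua")
proc:suspend()                  -- pause the process
print(proc:status())            -- "suspended"
proc:resume()                   -- let it continue
local code = proc:wait()        -- bring it to the foreground, block until it ends
print(code == proc:exit_code()) -- both report the same code after a normal finish
```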
You can also get all running background processes using the jobs function:
local proc_table = jobs("running")
-- End all background processes
for _, proc in pairs(proc_table) do
print(proc:terminate()) -- Prints true on success
end
NOTE: Apolo only keeps track of background processes started by the running program. You cannot manage background processes started by other programs or system processes.
If you're on Windows, you will need to install MinGW to compile the package. Run make linux on Linux or mingw32-make mingw on Windows. Then go to the lib folder and copy the apolo.lua file and the newly made apolocore.so (or apolocore.dll on Windows) into your Lua bin (or any directory on the Lua path), and you're ready to require 'apolo'.