Nushell
The very first thing that most people using Linux get used to is the shell. Most package managers are used primarily in the shell, and pretty much anything can be done from there. And with the Unix philosophy of “write programs that do one thing and do it well”[^1], I think the design and usage of shells like Bash or Zsh make plenty of sense.
However, that same philosophy complicates a lot of things in the shell. When a user first decides they want to script something, it seems benign and easy to do: just put what you would write in the terminal into a script file! Obviously that should be fine, and as long as you stay within a single tool, it might actually work. But as soon as you have to start making decisions, parsing output and creating input, things get tricky. If everyone followed the Unix philosophy it would probably be pretty straightforward, but very few actually do. All of a sudden you have to ask yourself questions that you were trying to avoid by not writing your script in Perl, Python or Ruby.
“How do I…?”
- Know that the program executed properly?
- Know what kind of output I should expect?
The first one should have an easy answer: an exit code of 0 means it worked. If the application is written properly, then you’re all set; move on to the next problem. But once you go outside the coreutils, this assumption can get you into trouble. Maybe some application exits with 0 but prints the error to the screen because it expects a user to be reading the output, or it exits with different codes than the normal errno(3) constants. I’m guilty of this myself, since I generally just use 0 and 1, and anything besides that is only there for debugging, but this can be an issue.
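Jumping ahead a bit to the shell this post ends up recommending: Nushell at least makes that check explicit instead of implicit. A minimal sketch, with a made-up file path:

```nu
# run an external command and capture its whole result;
# `complete` returns a record of stdout, stderr and exit_code
let result = do { ^grep foo /no/such/file } | complete

# decide based on the actual exit code, not on whether text showed up
if $result.exit_code != 0 {
    print $"grep failed with code ($result.exit_code)"
}
```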
The third part of the Unix Philosophy is
Write programs to handle text streams, because that is a universal interface.
On its face this sounds nice, until you realize that a text stream could be…anything. It could
- be a tab separated values list
- look like it’s separated with tabs, but actually use spaces
- be a JSON object
- be a series of JSON objects
- have color or terminal escape sequences
- be completely unstructured
Each of these needs to be treated very differently, making life more difficult for everyone involved. And this output format is rarely seen as something that should be versioned, so when it changes, anything that relies on it must change too. This is especially common for command line programs that take user input or just print to a terminal, as that output isn’t seen as something the developers need to pay attention to.
And native string handling in any shell is an absolute pain. Splitting a string using `IFS=...` is super prone to mistakes, and using the shell expansions is difficult when you want to make it portable. Making it POSIX compliant while parsing strings is a Herculean task that makes me question what a word really is every time. Quoting, un-quoting, remembering if the variable was quoted or if it’s an array, hoping that wherever you first declared it had it quoted…
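To jump ahead a little: the shell I land on below treats all of this as data rather than raw text. A hedged sketch (the file name and commands are just stand-ins):

```nu
# each of those text-stream shapes has a direct, built-in answer in nushell
open data.tsv                    # tab-separated file -> table
^ls -l | detect columns          # space-aligned text -> table
'{"a": 1}' | from json           # JSON -> record
^git log --color | ansi strip    # throw away the escape sequences

# and splitting a string never involves IFS
"one:two:three" | split row ":"  # -> [one, two, three]
```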
Enter the Alternative Shells
I first used a real alternative shell when I tried PowerShell Core (or 6), which solved a lot of these problems by using structured data. Programs had a specific set of outputs, which could still be text, but any actual PowerShell application would use a PowerShell object. Need to clear a directory? Use `Get-ChildItem` and pass that output into `Remove-Item`. It really is a lot of pipelines and data passing, which can be very jarring at first. It feels overly verbose at times, but there’s a real reason behind it.
PowerShell is great, but it’s very obviously just C# in a trench coat with lots of holes in it. Don’t get me wrong, .NET and C# are super powerful, but they don’t really feel very Unix-y at the end of the day. Then I stumbled upon Nushell, and I thought “oh, wow, seen this before, next.” And then I started using it, and it felt so much more like a real shell, rather than a programming language being adapted for the shell. The commands were short, reminiscent of the normal coreutils commands. It didn’t feel overly verbose to write, either, and the configuration felt much more Linux/Unix-y compared to PowerShell.
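For comparison, clearing a directory in Nushell looks roughly like this; a sketch, and certainly not the only way to write it:

```nu
# list the directory as a table, then delete each entry by its path;
# structurally the same idea as Get-ChildItem | Remove-Item
ls | each {|entry| rm -r $entry.name }
```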
Sticking with Nu
In most shells I feel like scripts are there just to let the user do a thing without typing everything out. Aliases, functions, scripts and more all feel like they’re the terminal but in a (hopefully small) text file, and because bash, or at least some POSIX-compliant-ish `sh`, is present on almost every system now, it feels super portable, small and light. Until you actually look at what the script requires. Parsing JSON? You’ll need `jq`. Doing any file operations? You should have them, but you’ll need the right coreutils. Doing math? You can do some in the shell, but normally you want `bc`. Replacing strings? Pick your poison: `sed`, `grep` or `awk`[^2]. Doing colors? Pull out ncurses or get ready to manually define all of your colors. Multi-line strings? Yeah, they work, but everyone just does multiple `echo` statements anyway.
With Nushell I feel like it’s truly a “batteries included shell,” not just a language that can do all the stuff I mentioned above. It doesn’t feel cumbersome to type `to json` when I want to convert something to JSON, compared to `ConvertTo-Json`; it’s almost as nice as just `jq`. And I don’t need to learn a whole new language like `jq`’s to handle tables, records and lists, because Nushell can already do that.
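A few of those batteries side by side; a quick sketch of things that would each need a separate tool in bash:

```nu
'{"a": 1, "b": 2}' | from json | values | math sum  # jq-plus-bc territory -> 3
"hello world" | str replace "world" "nu"            # sed territory
ls | to json                                        # structured output, no jq
$"(ansi green)ok(ansi reset)"                       # colors without ncurses
```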
But that’s only the first layer. Once you get over how nice and shiny it is, you realize that some of the stuff you took for granted doesn’t really work the same, and then suddenly it all clicks:
It’s not a shell, it actually is a language that just happens to be really, really nice in the shell.
Variable scoping suddenly makes sense, the separate `$env` table can have conversions for non-Nushell programs, and data suddenly just…makes sense. Reading the Nushell Philosophy is interesting, because it seems like it shouldn’t work in a shell, but the way it handles actual data makes it work. Communication with external programs can still be done with strings, but between internal commands it can stay structured for the entire pipeline. The way it handles state, variables and mutability makes writing a script feel more like writing Rust than writing for a shell.
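Those `$env` conversions look like this in practice; this is essentially the `PATH` example from the Nushell book’s configuration chapter:

```nu
# tell nushell how PATH converts between its internal list form and the
# separator-joined string that external programs expect
$env.ENV_CONVERSIONS = {
    "PATH": {
        from_string: {|s| $s | split row (char esep) }
        to_string: {|v| $v | str join (char esep) }
    }
}
```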
And the scripts feel first-class, not just like a file version of the shell. With the LSP it can find missing arguments, type errors and more before you even try to run the script. Writing nice terminal UIs feels easy and natural, since they’re just function arguments. Strings can be handled in the program without `jq` or `awk`, and the results can be passed around as real objects. No more `${foo[@]}` breaking something because you forgot to quote it.
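That checking falls out of how commands are declared. A small made-up example (`greet` isn’t from any real script):

```nu
# parameter types and comments double as documentation and as
# information the LSP can verify before the script ever runs
def greet [
    name: string    # who to greet
    --shout (-s)    # print in upper case
] {
    let msg = $"Hello, ($name)!"
    if $shout { $msg | str upcase } else { $msg }
}

greet "world" --shout  # => HELLO, WORLD!
```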
The only thing missing from some applications is that they expect an sh-like shell, so they don’t output JSON, just export statements (I’m looking at you, 1Password CLI). But anything that does output structured data is so easy to work with it almost feels wrong.
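Even the export-statement case is workable. A hedged sketch, with made-up variable names standing in for whatever the CLI actually prints:

```nu
# turn sh-style `export NAME="value"` lines into a record and load it
# into the environment
['export FOO="bar"' 'export BAZ="qux"']
| parse 'export {name}="{value}"'
| reduce -f {} {|row, acc| $acc | insert $row.name $row.value }
| load-env
```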
If you have the chance, I would highly recommend trying out Nu. It’s definitely not as stable or robust as the established shells, since it still isn’t 1.0 yet, but that will hopefully happen soon!
[^1]: Yes, I know it’s only part of the Unix philosophy, and it’s from a revision, but it’s still the main one people cite all the time.
[^2]: If you do this, JUST USE AWK. It can do all the things `sed` and `grep` can; stop doing `grep ... | awk ...`, just use `awk` on its own.