Whatever language a project uses, it often relies on shell scripting for repetitive tasks. The minimal script is the one launching your build tool (mvn package), but in the current ecosystem it is not rare to also use docker, minikube or other tools to develop faster. Let’s see how to structure these scripts to make them maintainable and easier to work with.

Example

In a microservice project you often have several tasks:

  1. build your project

  2. bundle your project as a docker image

  3. setup a dev/demo environment (minikube/microk8s for example)

  4. deploy your project dependencies (another microservice for example)

  5. load some external images to minikube registry

  6. deploy your microservice

  7. do it all at once (or group several of these tasks in a single command/shortcut)

You can write all these tasks in your preferred language - NodeJS, Java, Go, … - but at some point you will end up just doing some exec to delegate to minikube, docker or other programs. For that reason, having some shell scripts is generally very relevant for that part of your automation - I still recommend writing the documentation and other tooling code in your preferred language rather than in advanced scripts.
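
For instance (the image name and tag are placeholders), tasks 1, 2 and 5 of the list above mostly boil down to a handful of plain shell commands:

mvn package                          # 1. build the project
docker build -t my-service:dev .     # 2. bundle it as a docker image
minikube image load my-service:dev   # 5. load the image into the minikube registry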

So the question is: how do I structure my scripts to ease their everyday usage, given that I’m (generally) not a shell scripting expert?

Folder layout

The structure I tend to use is the following one:

.
├── commands (1)
│   ├── command1 (2)
│   │   ├── _cli (3)
│   │   └── index.sh (4)
│   └── command2
│       ├── _cli
│       └── index.sh
└── main.sh (5)
1 A folder which will contain all commands,
2 A particular command of the CLI,
3 A CLI dedicated folder with metadata (see help section),
4 The command entry point,
5 The CLI entry point, i.e. the script you will always invoke.

A command is nothing complicated, it is really just an index.sh doing what you need.
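
For example, a hypothetical commands/start/index.sh covering task 3 of the earlier list could be as small as this (the resource values are illustrative, adjust them to your machine):

commands/start/index.sh
#! /usr/bin/env bash
set -euo pipefail

# spin up a local dev environment and enable the ingress addon
minikube start --cpus 4 --memory 8g
minikube addons enable ingress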

This is optional, but you can add a shared/ or util/ folder next to commands containing utilities the commands can import/source to avoid rewriting the same code again and again (useful for logging, or for working with images the same way in all scripts if you have multiple images to pull/build/push/cache, etc.).
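
As a sketch, a hypothetical shared/log.sh could expose tiny logging helpers (the function names are illustrative) that every command sources through the MAIN_DIR variable set by main.sh below:

shared/log.sh
#! /usr/bin/env bash

# minimal logging helpers shared by all commands
log_info() { echo "[INFO] $*"; }
log_error() { echo "[ERROR] $*" >&2; }

A command would then start with source "$MAIN_DIR/shared/log.sh" before calling log_info/log_error.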

The main script is responsible for detecting the command to launch (first argument, $1), checking that the commands/<name>/index.sh file exists and launching it, forwarding the remaining arguments:

main.sh
#! /usr/bin/env bash

#
# <script> help for details
#

base="$(dirname $0)"

main() {
  if [ "$#" -lt 1 ]; then (2)
    echo "[ERROR] No command set"
    exit 5
  fi

  sub_command="$base/commands/$1/index.sh"
  if [ -f "$sub_command" ]; then (3)
    shift (4)

    (5)
    MAIN_DIR="$base" SCRIPT_BASE="$base/commands/$1" /usr/bin/env bash "$sub_command" "$@"
  else
    (6)
    echo "[ERROR] Unknown command $*, ensure $sub_command exists or fix its name"
    exit 6
  fi
}

main "$@"(1)
1 We call our entrypoint,
2 We check we have at least one command to execute or we fail,
3 We check the requested command exists,
4 If the command exists we drop its name from the arguments (so that only its own parameters are forwarded to it),
5 We call the command, forwarding the parameters from the terminal and exposing MAIN_DIR (the CLI root folder) and SCRIPT_BASE (the command folder) as environment variables,
6 If the command does not exist we print an error.
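
For illustration, assuming a hypothetical build command exists under commands/build/, a terminal session could look like this:

./main.sh build --skip-tests   # --skip-tests is forwarded to commands/build/index.sh as its $1
./main.sh                      # [ERROR] No command set (exit code 5)
./main.sh foo                  # prints the unknown command error and exits with code 6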

At that stage you have everything you need to handle your scripts, but we can go a bit further thanks to this structure.

Writing a help command

When you are alone on a project you can stop there, but that is rarely the case, and automating is also about sharing with your project coworkers. For that reason, it is important to ease the usage of the scripts. A minimum requirement is to associate a help command to all these scripts, enabling anyone to get started with this CLI - like any CLI actually.

You probably already guessed you can do it very easily by creating a commands/help/index.sh and echo-ing all the help there. This solution works very well but has a drawback: it requires you to maintain the help of all commands in the help command itself. In other words, if you add/remove/update a command you must not forget to also update the help command.

To simplify that command, we’ll use the _cli folder of each command. Each _cli folder will contain a help.txt file the help command will use to document the related command. This way the help/index.sh script only has to:

  1. print some general help (global usage etc),

  2. loop over all commands and use their help.txt as their description.

Here is a script doing that:

commands/help/index.sh
cd "$SCRIPT_BASE"
echo -e "Commands:\n\n" (1)
for i in $(ls -d * | sort); do (2)
  echo -e "  - $i" (3)

  (4)
  [ -f "$i/_cli/help.txt" ] && sed 's/^/    /' "$i/_cli/help.txt"

  echo (5)
done
cd - &> /dev/null
1 Global help (optionally, if this text gets more verbose, you can put it in an external file such as commands/help/_cli/global.help.txt),
2 Loop over all commands (sorted by name),
3 Print the command name as a list,
4 If it exists, print its description (help.txt) indented,
5 Print an empty line to make it sexier.

And that’s it: if you want to add a command, you create a folder in commands, an index.sh and a _cli/help.txt and you are done. If you want to remove a command you just drop its folder.
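
For example, a hypothetical commands/build/_cli/help.txt could contain just:

commands/build/_cli/help.txt
Builds the project and bundles it as a docker image.
Usage: ./main.sh build [extra maven options]

The help command then prints this text, indented, under the build entry.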

Conclusion

With these two simple base scripts, you can really simplify the maintenance of your automation.

You can use the same structural trick for a multi-level CLI. For example you can have a commands/<team>/<command_name>/index.sh structure (adding a team level) which enables you to group commands per responsibility. A common usage of that is ./main.sh devops deploy --environment staging vs ./main.sh dev build. While not required, it makes it possible to share scripts and have a strong ownership of the scripts by design, but this part really depends on how you work in your company; the important point is that you have all the basics in this post to get a maintainable CLI in your project ;).
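
As a sketch, such a two-level layout could look like this (team and command names are purely illustrative):

.
├── commands
│   ├── dev
│   │   └── build
│   │       ├── _cli
│   │       └── index.sh
│   └── devops
│       └── deploy
│           ├── _cli
│           └── index.sh
└── main.sh

main.sh would then resolve commands/$1/$2/index.sh instead of commands/$1/index.sh and shift twice before forwarding the remaining arguments.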
