# Linux CLI for Data Science 2020

The 2020 edition at the Faculty of Mathematics, Physics and Informatics of Comenius University.

- Lectures (B): Tuesday, 11:00 - 13:40
- Labs (H-6): Monday, 16:30 - 18:00 (voluntary)

## Goal

The goal of this lab is to:

- show you the cool things (your) computers are capable of
- get you acquainted with UNIX-like operating systems, the tradition that powers much of modern computing
- be a fun break from other classes

What you are studying is already non-trivial. Our job is not to punish you for choosing to do that, but to give you practical skills that will let you apply it straight away.

## Lab Lectures

### Lecture 1: Intro to Command Line

Discussed material:

- History of UNIX-like operating systems
- Text console, Shell and Secure Shell (SSH)
- Shell Commands (short intro)
- ... and more in the first set of slides

Supplementary resources:

- The TTY demystified: So what exactly is this teletype that has been mentioned a few times? The article starts with a caveat that it is not particularly elegant, but once you read through it, you'll have a much more thorough understanding of (modern) UNIX-like systems and of UNIX history as well.
- The History of Unix by Rob Pike: It is not every day that you get an important piece of (computing) history described by someone who helped make it. Well worth the watch!
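If you want to try the first few commands from the slides right away, a minimal first session might look like this (any POSIX shell will do):

```shell
# A few first commands to get oriented in the shell
whoami              # which user you are logged in as
pwd                 # print the current working directory
uname -s            # which kernel you are running, e.g. "Linux"
date                # current date and time
echo "hello, shell" # print a line of text back at you
```

Every one of these is a tiny standalone program the shell runs for you, which is the core idea we will keep coming back to all semester.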

### Lecture 2: Files and Directories

Discussed material:

- UNIX-style file system
- Directory tree and its important parts
- Navigating the filesystem
- Completion and autocompletion in Bash
- ... and more in the second set of slides

Supplementary resources:

- How dotfiles came to be: A short story (by Rob Pike once again) about how dotfiles (you know, the hidden files that start with a dot) came to be and what it says about the unintended effects of cutting corners and just "hacking around" a problem.
- The history of the /usr split: A different story with a very similar moral. Read through it to find out how the /bin vs. /usr/bin split happened, how irrelevant it is these days, and how one needs to fight against bad ideas in order not to let them propagate.
- Linux Filesystem Hierarchy: A deeper discussion of the various parts of the standard Linux filesystem, describing the individual directories in much greater detail than the slides ever could.
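To make the navigation commands concrete, here is a small session you can replay yourself (the directory names are made up for the example):

```shell
# Build a tiny directory tree in a temporary place and move around in it
dir=$(mktemp -d)               # a scratch directory we can safely play in
mkdir -p "$dir/projects/lab1"  # -p creates the intermediate directories too
cd "$dir/projects/lab1"
pwd                            # absolute path of where we are now
cd ..                          # one level up, into "projects"
ls -la                         # -a also lists the hidden "dotfiles" . and ..
cd /                           # jump to the root of the directory tree
cd -                           # ...and back to the previous directory
cd                             # with no argument: go to the home directory
rm -r "$dir"                   # clean up the scratch directory
```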

### Lecture 3: Standard I/O, Pipes and Text Processing

Discussed material:

- Standard Input/Output
- Pipes
- Introduction to Text Processing
- ... and more in the third set of slides
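As a taste of what pipes buy you, here is the classic word-frequency pipeline; each stage is a small program that does exactly one thing (the sample text is made up):

```shell
# Count the most common words in a text, one pipe stage at a time
printf 'the quick fox\nthe lazy dog\nthe quick dog\n' |
  tr ' ' '\n' |   # one word per line
  sort |          # identical words end up next to each other
  uniq -c |       # collapse runs, prefixing each word with its count
  sort -rn |      # most frequent first
  head -n 5       # keep only the top five
```

The first line of the output is `3 the` (possibly with leading spaces from `uniq -c`), and swapping any single stage changes the whole meaning of the pipeline.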

### Lecture 4: Processes and Signals

Supplementary resources:

- An introduction to UNIX processes: This piece gives you "yet another" rundown of what UNIX processes are about. What is interesting about it is the part on fork and exec that we only quickly went over in the lecture. I would very much recommend taking a look at it.
- Two great signals: SIGSTOP and SIGCONT: What do you do when you have a long-running script that you cannot afford to (or just don't want to) stop but would very much like to at least pause? This article will tell you a bit about that.
- Should you be scared of Unix signals?: A short attempt at making Unix signals look a bit less scary. It is a bit technical, but if you'd like to go a bit deeper, it is still very much worth reading.
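A minimal sketch of the SIGSTOP/SIGCONT trick from the second article, using sleep as a stand-in for your long-running script:

```shell
# Pause and resume a background process with signals
sleep 60 &             # start a long-running command in the background
pid=$!                 # $! is the PID of the most recent background job
kill -STOP "$pid"      # pause it; SIGSTOP cannot be caught or ignored
ps -o stat= -p "$pid"  # the state column shows "T" while it is stopped
kill -CONT "$pid"      # resume it right where it left off
kill "$pid"            # the default signal is SIGTERM: ask it to terminate
```

In an interactive shell, Ctrl+Z followed by `fg`/`bg` does the same dance with the foreground job.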

### Lecture 5: Users, Groups and Regular Expressions

Supplementary resources:

- Ken Thompson's Unix password: A story of how the password of one of the old-timers was cracked nearly 40 years later and why "shadowing" passwords is generally not a bad idea.
- The origins of grep: Brian Kernighan, one of the forefathers of UNIX, discusses how grep came to be -- and it makes for a rather interesting story!
- When it comes to regular expressions, it helps a lot to visualize what they match and how. There are two tools we recommend in this regard:
  1. Regex101, which is basically an integrated development environment (IDE) for regular expressions
  2. Regexper, which nicely visualizes regular expressions as "proto programs". Here is a sample visualization.
- If you'd like to play with regular expressions a bit, there are RegexGolf, Regex Tuesday and RegexCrossword. We recommend them all!
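Before heading to the visualizers, you can try regular expressions straight from the shell with grep; here is a small sketch (the file name and contents are made up):

```shell
# Matching lines with extended regular expressions (grep -E)
printf 'alice:1001\nbob:1002\ncarol-admin:0\n' > users.txt
grep -E '^[a-z]+:[0-9]+$' users.txt  # only plain "name:number" lines match
grep -c ':' users.txt                # -c counts the matching lines
grep -E -o '[0-9]+' users.txt        # -o prints just the matched parts
rm users.txt
```

The first grep matches `alice:1001` and `bob:1002` but not `carol-admin:0`, because the hyphen is outside the `[a-z]` character class.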

### Lecture 6: Vim

Discussed material:

- Vim's philosophy
- NORMAL, INSERT, VISUAL and COMMAND modes
- Editing text in Vim using regular expressions and Unix commands
- ... and more in the sixth set of slides
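A few COMMAND-mode examples of the kind shown in the lecture (the patterns are made up; try them on any scratch file):

```vim
" Replace every 'foo' with 'bar' in the whole file (% means all lines)
:%s/foo/bar/g
" Delete trailing whitespace at the ends of lines, via a regular expression
:%s/\s\+$//
" Delete all empty lines
:g/^$/d
" Filter the whole buffer through an external Unix command
:%!sort -u
```

The last one is where the two halves of the course meet: anything you can do in a pipe, you can also do to a Vim buffer.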

### Lecture 7: File and directory attributes

Discussed material:

- The concept of an inode
- File metadata (permissions, timestamps, owner and group)
- ... and more in the seventh set of slides

Supplementary resources:

- Symlinks, Hardlinks, Reflinks and ML projects: This article goes deeper into how these kinds of links can be used in Machine Learning (ML) projects where you work with a ton of data.
- Symlinks in Windows 10: Yes, they are such a good idea that even Windows (at least in its latest version) has them now. The reason why is interesting: in the current environment, the commonly used development tools basically require them.
- unix-permissions: Swiss Army knife for Unix permissions: A simple utility that allows you to (programmatically) convert between the various ways of describing permissions of UNIX files.
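Here is a small sketch tying inodes, links and permissions together (file names are made up; `stat -c` is the GNU coreutils variant):

```shell
# Inspect inodes, permissions, and both kinds of links
echo data > original.txt
ln original.txt hardlink.txt     # hard link: a second name for the same inode
ln -s original.txt symlink.txt   # symlink: a tiny file that holds a path
ls -li original.txt hardlink.txt symlink.txt  # -i prints inode numbers
chmod 640 original.txt           # rw- for owner, r-- for group, --- for others
stat -c '%a %h %i' original.txt  # octal permissions, link count, inode number
rm original.txt hardlink.txt symlink.txt
```

Note that `ls -li` shows the same inode number for `original.txt` and `hardlink.txt`, while `symlink.txt` gets its own.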

### Lecture 8: find and xargs

Discussed material:

- What looking stuff up on the filesystem would entail (grep)
- find
- xargs
- ... and more in the eighth set of slides

Supplementary resources:

- The history of find: Despite what we make it out to be, find does have a bit of a negative connotation to it, mostly because it does not embody the UNIX philosophy to the extent other tools do. Check out this link for a fun story that provides a bit of a backstory (and one heck of a punchline!).
- Why doesn't grep work: A short article on the difference between Basic and Extended Regular Expressions (and how that relates to the situation in "real" programming languages).
- Things you don't know about xargs: Some more advanced capabilities of xargs, described in a friendly way with more than a few examples.
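The canonical find-plus-xargs combination looks roughly like this (the directory and file names are made up for the demo):

```shell
# Find all *.log files under a directory and count their lines
mkdir -p demo/sub
printf 'a\nb\n' > demo/one.log
printf 'c\n'    > demo/sub/two.log
find demo -name '*.log'            # just list the matching paths
find demo -name '*.log' -print0 |  # NUL-separated: safe for odd file names
  xargs -0 wc -l                   # run wc once, with all paths as arguments
rm -r demo
```

The `-print0`/`-0` pair is the habit worth building early: it keeps the pipeline correct even when file names contain spaces or newlines.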

### Lecture 9: sed and awk

Supplementary resources:

- A conundrum for a sed wizard: A real-life story of the sort of craziness people solve with sed (and a reminder that not being able to figure something out is more than OK).
- Removing duplicate lines from files preserving their order: Despite what it sounds like, the task is actually not that simple, and yet with awk you can pull it off with a simple oneliner. [1] I especially recommend this article for its second part, where the author explains what actually happens when that oneliner gets executed and why the alternatives would not work.
- Expense Calculator in awk: One of the most beautiful examples of what awk is capable of. The best part: you already know enough to read through the awk code yourself!
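To give you a feel for both tools side by side, here is a tiny made-up expense report processed with sed and with awk (including the order-preserving dedup idea from the second article):

```shell
# The same small report processed with sed and with awk
printf 'lunch 7.50\nbooks 12.00\nlunch 6.00\n' > expenses.txt
sed 's/^lunch/food/' expenses.txt  # rewrite a category name on matching lines
awk '{ sum += $2 } END { print "total:", sum }' expenses.txt
awk '!seen[$1]++' expenses.txt     # keep only the first line per category
rm expenses.txt
```

The summing oneliner prints `total: 25.5`; the last one prints the lunch and books lines once each, in their original order.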

### Lecture 10: Introduction to Bash scripting

Supplementary resources:

- Learn X in Y minutes: bash: After a while you'll find out that all programming languages look alike (provided you are familiar with their basic building blocks). The Learn X in Y minutes site will let you get up to speed on basically any programming language there is -- Bash included. Do check the link out -- there is a non-trivial chance you'll learn something new.
- Advanced Bash-Scripting Guide: What we went through in the lecture was a brief introduction at best. If you'd like to go a bit deeper, there is probably no better resource than the Advanced Bash-Scripting Guide linked above. Its second part (titled Basics) covers similar material to the lecture, while the other three parts show how much more one can do with Bash (turns out, it's quite a lot).
- Turning a gzip archive into a database: This article tells a nice story of how knowledge of the internals of the gzip format (basically "how it works inside") allows one to turn such archives into databases. Not all that practically useful, but still quite interesting (and educative).
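A minimal sketch of the scripting building blocks from the lecture (functions, arguments, a condition and a loop; the `greet` function is made up for the example):

```shell
#!/bin/bash
# A tiny self-contained script: greet every name it is given
set -euo pipefail            # exit on errors and on use of unset variables

greet() {
    if [ "$#" -eq 0 ]; then
        echo "usage: greet NAME..." >&2
        return 1
    fi
    for name in "$@"; do     # "$@" keeps every argument as a separate word
        echo "Hello, ${name}!"
    done
}

greet Ada Linus              # prints "Hello, Ada!" and "Hello, Linus!"
```

Quoting `"$@"` and `"${name}"` is the part beginners skip most often, and the part that bites hardest once file names with spaces show up.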

### Lecture 11: Git

Discussed material:

- Version control
- Git fundamentals (repository, staging area, commit, branch)
- Introduction to Git remotes (cloning, pushing/pulling)
- ... and more in the eleventh set of slides

Supplementary resources:

- A Grip On Git: Since the lecture used slides, it was easy to lose the conceptual train of thought. The tutorial at this link could help in that case: it walks you through similar content while providing a visual representation of what is going on in the repository as you execute various git commands. Worth checking out, even if just for the artistic experience.
- Visual Git Reference: We only went through a handful of commands and a few straightforward use cases in the lecture. This visual reference covers most of the commands you will encounter if you end up using Git. If you like pretty pictures, I certainly recommend you check it out.
- fh: file history: It turns out the diff command we talked about some time ago is capable of printing the difference between two files not just in a "human readable" format, but also as commands for ed (the line-oriented predecessor of Vi and Vim). This allows one to put together a simple "version control system" using just the commands we have already discussed in class, that is ed, diff, awk, sed, and sh. (Yes, this is considered nerdy even by people who already control their computers from the command line...)
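To see the trick fh relies on for yourself, compare two versions of a file and let diff speak ed (the file names and contents are made up):

```shell
# diff -e prints the difference as commands for the ed line editor
printf 'one\ntwo\nthree\n' > v1.txt
printf 'one\n2\nthree\n'   > v2.txt
diff -e v1.txt v2.txt || true  # exit status 1 just means "the files differ"
rm v1.txt v2.txt
```

The output is the three-line ed script `2c`, `2`, `.` (change line 2 to the text `2`); replaying such scripts in order is, in essence, a version history.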

### Lecture 12: csvkit and jq

Supplementary resources:

- So You Want To Write Your Own CSV code?: A passive-aggressive discussion of what parsing CSV actually entails. If you have never done it yourself, take a look -- and I am pretty sure you never will.
- Illustrated jq tutorial: If you liked the few jq examples we showed, check out this tutorial as well. It goes very nicely over more than a few examples that are rather close to real life. Note that you can click on any "piped" command and see the interim results (i.e. what the data looks like after that part of the pipe gets evaluated).
- Console Spreadsheets: You have already learned that people tend to be crazy when using the command line. This link shows that spreadsheets are no exception -- it can help you grasp what the world looked like before Microsoft Excel and Google Sheets became the status quo.
- Other CSV processing CLI tools:
  - xsv: A Rust implementation of a suite very similar to csvkit. A bit quirky but really fast.
  - miller: An alternative to csvkit which tries to be more multi-purpose (its tag line says "Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON").
  - textql: It often happens that you wish you could just run an SQL query against your .csv file. That is exactly what textql allows you to do. If you'd like something even more versatile, check out q.
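To see what the first article is warning about, watch a naive comma split mangle a perfectly valid CSV file (the file is made up for the demo) -- this is exactly why tools like csvkit exist:

```shell
# A quoted CSV field may itself contain the delimiter
printf '%s\n' 'id,name,notes' '1,"Doe, Jane",ok' > people.csv
cut -d, -f2 people.csv             # naive split: the second row yields "Doe
awk -F, '{ print $2 }' people.csv  # awk with -F, falls into the same trap
rm people.csv
```

Both commands cut the quoted field `"Doe, Jane"` in half at the inner comma, silently producing wrong data.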

### Lecture 13: Modern Unix Tools

Discussed material:

- tmux
- new shells (fish, xonsh and nushell)
- grep alternatives (especially ripgrep)
- other miscellaneous tools (replacements for ls, cat, df, ps and so on)
- ... and more in the thirteenth set of slides

Supplementary resources:

- tmux - a very simple beginner's guide: An extremely oversimplified guide to tmux which you can go through in about 5 minutes. We strongly encourage you to try it out -- feel free to use davos, where tmux ought to be set up already, as we've used it throughout the whole semester. For a list of things you can do with tmux, feel free to check out the tmux cheat sheet.
- Modern Alternatives of Command-Line Tools: This article inspired much of the content discussed in the slides. It is accompanied by graphical demos of the various commands. Worth taking a look!
- Become shell literate: Our final attempt at persuading you that this whole thing made sense. Full disclosure: the author of the article is a well-known free software advocate, so he is far from impartial. That said, he is certainly not alone in his suggestion; here is another example from Letters To A New Developer.

## Courses to consider next

There are quite a few courses at Matfyz which build upon the foundation we've tried to lay in this course.

Here are the ones we think would be most appropriate:

## Resources

### Slides

The LISA conference (part of USENIX, the old UNIX organization) has had a workshop called Linux Productivity Tools. It is basically "zero to hero" in 89 slides and well worth checking out, especially if you are in a hurry.

Linux Productivity Tools slides

### Historical Books

If you like books, here are two worth reading:

UNIX: A History and a Memoir by Brian W. Kernighan

A historical account of how UNIX came to be by someone who was there when it happened. It will help you paint the proper picture of what is meant when people say stuff like "UNIX legacy" or "the UNIX era".

The Cuckoo's Egg by Clifford Stoll

Strangely enough, this one reads like a novel: a true story of a physicist who tracked one of the first documented "hackers" (cracker would really be a better term here, but I digress) whom he found snooping around his systems. The best part is that it is all real, down to the (obviously UNIX) commands that were used. Well worth a read!

## Grading

Assignments: 50%, Exam: 50%

There will be one assignment per week. Each is (normally) worth 5% (plus some bonuses). You have up to a week to finish them, but most people manage to do it during the lab.

The exam will cover the content discussed in the Tuesday lectures.

[1] To be fair, that oneliner is a bit "golfed" (i.e. not that straightforward to read and interpret). Here is another, hopefully clearer, version: `awk '!visited[$0] { print $0; visited[$0] = 1 }'`