
You can use the grep tool to search the current folder recursively, like:

grep -r "class foo" .

Note: -r - Recursively search subdirectories.

You can also use globbing syntax to search within specific files such as:

grep "class foo" **/*.c

Note: with the globbing option (**), the shell expands the pattern recursively, so grep searches every file matching the given extension or pattern. To enable this syntax in Bash, run: shopt -s globstar. You may also use **/*.* for all files (excluding hidden files and files without an extension), or any other pattern.

If you get an 'Argument list too long' error, consider narrowing down your search, or use find instead, such as:

find . -name "*.php" -execdir grep -nH --color=auto foo {} ';'

Alternatively, use ripgrep.

ripgrep

If you're working on larger projects or big files, consider using ripgrep instead, like:

rg "class foo" .

Check out the docs, installation steps, or source code on the GitHub project page.

It's much quicker than other tools such as GNU/BSD grep, ucg, ag, sift, ack, and pt, since it is built on top of Rust's regex engine, which uses finite automata, SIMD, and aggressive literal optimizations to make searching very fast.

It supports ignore patterns specified in .gitignore files, and a single file path can be matched against multiple glob patterns simultaneously.

You can use common parameters such as:

  • -i - Case-insensitive searching.
  • -I - Ignore binary files.
  • -w - Match whole words only (as opposed to partial matches).
  • -n - Show the line number of each match.
  • -C/--context (e.g. -C5) - Increase the context, so you see the surrounding code.
  • --color=auto - Mark up the matching text.
  • -H - Display the filename where the text is found.
  • -c - Display the count of matching lines. Can be combined with -H.
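For example, a case-insensitive, whole-word search that shows line numbers and two lines of surrounding context might look like the sketch below (the pattern foo and the path src/ are placeholders; these particular flags work the same way with grep):

rg -i -w -n -C 2 "foo" src/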


Page 2

I'm trying to find a way to scan my entire Linux system for all files containing a specific string of text. ... Is this close to the proper way to do it? If not, how should I? ... This ability to find text strings in files would be extraordinarily useful for some programming projects I'm doing.

While you should never replace (or alias) a system command with a different program, due to the risk of mysteriously breaking scripts or other utilities, if you are running a text search manually, or from your own scripts or programs, you should consider the fastest suitable program when searching a large number of files many times. Ten minutes to half an hour spent installing and familiarizing yourself with a better utility can be recovered after a few uses for the use case you described.

A webpage offering a "Feature comparison of ack, ag, git-grep, GNU grep and ripgrep" can assist you to decide which program offers the features you need.

  • Andrew Gallant's Blog claims: "ripgrep is faster than {grep, ag, git grep, ucg, pt, sift}" (a claim shared by some of the others, which is why a feature comparison is helpful). Of particular interest is his section on regex implementations and pitfalls.

    The following command searches all files, including hidden and binary files:

    $ rg -uuu foobar

  • The Silver Searcher (ag) claims to be 5-10x faster than ack, and it is suggested in some other answers. Its GitHub repository doesn't look as recently active as ripgrep's, and while it has noticeably more commits and branches, it has fewer releases; it's hard to draw an absolute conclusion from those statistics. The short version: ripgrep is faster, but there's a tiny learning curve to avoid being caught out by the differences.

  • So what could be next? You guessed it: The Platinum Searcher (pt). It claims to search code about 3–5× faster than ack, with speed roughly equal to The Silver Searcher. It's written in Go and searches UTF-8, EUC-JP and Shift_JIS files, if that's of interest. Its GitHub repository is neither particularly recent nor active. Go itself has a fast and robust regex engine, but The Platinum Searcher would be easier to recommend if it had more user interest.

For a combination of speed and power, indexed search engines such as Elasticsearch or Solr can be a long-term investment that pays off, but not if you want a quick and simple replacement for grep. On the other hand, both have an API which can be called from any program you write, adding powerful search to your program.

While it's possible to spawn an external program, execute a search, intercept its output and process it, calling an API is the way to go for power and performance.
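As a rough sketch of that approach, assuming a local Elasticsearch instance with a hypothetical index named notes, a full-text query is a single HTTP call whose JSON response your program can parse:

$ curl -s "localhost:9200/notes/_search?q=foobar&pretty"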

This question was protected Aug 6 '15 at 19:34 with this caution:
  We're looking for long answers that provide some explanation and context. Don't just give a one-line answer; explain why your answer is right, ideally with citations.

While some answers suggest alternative ways to accomplish a search, they don't explain why, beyond saying it's "free", "faster", "more sophisticated", has "tons of features", etc. Don't try to sell it; just tell us "why your answer is right". I've attempted to teach how to choose what's best for the user, and why. That is why I offer yet another answer when there are already so many; otherwise I'd agree that there are already quite a few. I hope I've brought something new to the table.

Teaching: 25 min
Exercises: 20 min

Questions

  • How can I find files?

  • How can I find things in files?

Objectives

  • Use grep to select lines from text files that match simple patterns.

  • Use find to find files and directories whose names match simple patterns.

  • Use the output of one command as the command-line argument(s) to another command.

  • Explain what is meant by ‘text’ and ‘binary’ files, and why many common tools don’t handle the latter well.

In the same way that many of us now use ‘Google’ as a verb meaning ‘to find’, Unix programmers often use the word ‘grep’. ‘grep’ is a contraction of ‘global/regular expression/print’, a common sequence of operations in early Unix text editors. It is also the name of a very useful command-line program.

grep finds and prints lines in files that match a pattern. For our examples, we will use a file that contains three haiku taken from a 1998 competition in Salon magazine. For this set of examples, we’re going to be working in the writing subdirectory:

$ cd
$ cd Desktop/shell-lesson-data/exercise-data/writing
$ cat haiku.txt

The Tao that is seen
Is not the true Tao, until
You bring fresh toner.

With searching comes loss
and the presence of absence:
"My Thesis" not found.

Yesterday it worked
Today it is not working
Software is like that.

We haven’t linked to the original haiku because they don’t appear to be on Salon’s site any longer. As Jeff Rothenberg said, ‘Digital information lasts forever — or five years, whichever comes first.’ Luckily, popular content often has backups.

Let’s find lines that contain the word ‘not’:
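$ grep not haiku.txt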

Is not the true Tao, until
"My Thesis" not found.
Today it is not working

Here, not is the pattern we’re searching for. The grep command searches through the file, looking for matches to the pattern specified. To use it, type grep, then the pattern we’re searching for, and finally the name of the file (or files) we’re searching in.

The output is the three lines in the file that contain the letters ‘not’.

By default, grep searches for a pattern in a case-sensitive way. In addition, the search pattern we have selected does not have to form a complete word, as we will see in the next example.

Let’s search for the pattern: ‘The’.
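$ grep The haiku.txt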

The Tao that is seen
"My Thesis" not found.

This time, two lines that include the letters ‘The’ are outputted, one of which contained our search pattern within a larger word, ‘Thesis’.

To restrict matches to lines containing the word ‘The’ on its own, we can give grep the -w option. This will limit matches to word boundaries.
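$ grep -w The haiku.txt

The Tao that is seen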

Later in this lesson, we will also see how we can change the search behavior of grep with respect to its case sensitivity.

Note that a ‘word boundary’ includes the start and end of a line, so not just letters surrounded by spaces. Sometimes we don’t want to search for a single word, but a phrase. This is also easy to do with grep by putting the phrase in quotes.

$ grep -w "is not" haiku.txt

We’ve now seen that you don’t have to have quotes around single words, but it is useful to use quotes when searching for multiple words. It also helps to make it easier to distinguish between the search term or phrase and the file being searched. We will use quotes in the remaining examples.

Another useful option is -n, which numbers the lines that match:
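$ grep -n "it" haiku.txt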

5:With searching comes loss
9:Yesterday it worked
10:Today it is not working

Here, we can see that lines 5, 9, and 10 contain the letters ‘it’.

We can combine options (i.e. flags) as we do with other Unix commands. For example, to find the lines that contain the word ‘the’, we can combine the option -w to match the whole word ‘the’ and the option -n to number the lines that match:

$ grep -n -w "the" haiku.txt

2:Is not the true Tao, until
6:and the presence of absence:

Now we want to use the option -i to make our search case-insensitive:

$ grep -n -w -i "the" haiku.txt

1:The Tao that is seen
2:Is not the true Tao, until
6:and the presence of absence:

Now, we want to use the option -v to invert our search, i.e., we want to output the lines that do not contain the word ‘the’.

$ grep -n -w -v "the" haiku.txt

1:The Tao that is seen
3:You bring fresh toner.
4:
5:With searching comes loss
7:"My Thesis" not found.
8:
9:Yesterday it worked
10:Today it is not working
11:Software is like that.

If we use the -r (recursive) option, grep can search for a pattern recursively through a set of files in subdirectories.

Let’s search recursively for Yesterday in the shell-lesson-data/exercise-data/writing directory:
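$ grep -r Yesterday .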

./LittleWomen.txt:"Yesterday, when Aunt was asleep and I was trying to be as still as a
./LittleWomen.txt:Yesterday at dinner, when an Austrian officer stared at us and then
./LittleWomen.txt:Yesterday was a quiet day spent in teaching, sewing, and writing in my
./haiku.txt:Yesterday it worked

grep has lots of other options. To find out what they are, we can type:
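$ grep --help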

Usage: grep [OPTION]... PATTERN [FILE]...
Search for PATTERN in each FILE or standard input.
PATTERN is, by default, a basic regular expression (BRE).
Example: grep -i 'hello world' menu.h main.c

Regexp selection and interpretation:
  -E, --extended-regexp     PATTERN is an extended regular expression (ERE)
  -F, --fixed-strings       PATTERN is a set of newline-separated fixed strings
  -G, --basic-regexp        PATTERN is a basic regular expression (BRE)
  -P, --perl-regexp         PATTERN is a Perl regular expression
  -e, --regexp=PATTERN      use PATTERN for matching
  -f, --file=FILE           obtain PATTERN from FILE
  -i, --ignore-case         ignore case distinctions
  -w, --word-regexp         force PATTERN to match only whole words
  -x, --line-regexp         force PATTERN to match only whole lines
  -z, --null-data           a data line ends in 0 byte, not newline

Miscellaneous:
...        ...        ...

Which command would result in the following output:

and the presence of absence:

  1. grep "of" haiku.txt
  2. grep -E "of" haiku.txt
  3. grep -w "of" haiku.txt
  4. grep -i "of" haiku.txt

The correct answer is 3, because the -w option looks only for whole-word matches. The other options will also match ‘of’ when part of another word.

grep’s real power doesn’t come from its options, though; it comes from the fact that patterns can include wildcards. (The technical name for these is regular expressions, which is what the ‘re’ in ‘grep’ stands for.) Regular expressions are both complex and powerful; if you want to do complex searches, please look at the lesson on our website. As a taster, we can find lines that have an ‘o’ in the second position like this:

$ grep -E "^.o" haiku.txt

You bring fresh toner.
Today it is not working
Software is like that.

We use the -E option and put the pattern in quotes to prevent the shell from trying to interpret it. (If the pattern contained a *, for example, the shell would try to expand it before running grep.) The ^ in the pattern anchors the match to the start of the line. The . matches a single character (just like ? in the shell), while the o matches an actual ‘o’.

Leah has several hundred data files saved in one directory, each of which is formatted like this:

2012-11-05,deer,5
2012-11-05,rabbit,22
2012-11-05,raccoon,7
2012-11-06,rabbit,19
2012-11-06,deer,2
2012-11-06,fox,4
2012-11-07,rabbit,16
2012-11-07,bear,1

She wants to write a shell script that takes a species as the first command-line argument and a directory as the second argument. The script should return one file called <species>.txt containing a list of dates and the number of that species seen on each date. For example using the data shown above, rabbit.txt would contain:

2012-11-05,22
2012-11-06,19
2012-11-07,16

Below, each line contains an individual command or pipe. Arrange them in sequence into one command to achieve Leah’s goal:

cut -d : -f 2
>
|
grep -w $1 -r $2
|
$1.txt
cut -d , -f 1,3

Hint: use man grep to look for how to grep text recursively in a directory and man cut to select more than one field in a line.

An example of such a file is provided in shell-lesson-data/exercise-data/animal-counts/animals.csv

grep -w $1 -r $2 | cut -d : -f 2 | cut -d , -f 1,3 > $1.txt

Actually, you can swap the order of the two cut commands and it still works. At the command line, try changing the order of the cut commands, and have a look at the output from each step to see why this is the case.
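For reference, a minimal count-species.sh might contain nothing more than that pipeline; this is a sketch with no error handling, and the script name is simply the one used in the call below:

# count-species.sh: first argument is the species, second is the directory to search
grep -w $1 -r $2 | cut -d : -f 2 | cut -d , -f 1,3 > $1.txt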

You would call the script above like this:

$ bash count-species.sh bear .

You and your friend, having just finished reading Little Women by Louisa May Alcott, are in an argument. Of the four sisters in the book, Jo, Meg, Beth, and Amy, your friend thinks that Jo was the most mentioned. You, however, are certain it was Amy. Luckily, you have a file LittleWomen.txt containing the full text of the novel (shell-lesson-data/exercise-data/writing/LittleWomen.txt). Using a for loop, how would you tabulate the number of times each of the four sisters is mentioned?

Hint: one solution might employ the commands grep and wc and a |, while another might utilize grep options. There is often more than one way to solve a programming task, so a particular solution is usually chosen based on a combination of yielding the correct result, elegance, readability, and speed.

for sis in Jo Meg Beth Amy
do
    echo $sis:
    grep -ow $sis LittleWomen.txt | wc -l
done

Alternative, slightly inferior solution:

for sis in Jo Meg Beth Amy
do
    echo $sis:
    grep -ocw $sis LittleWomen.txt
done

This solution is inferior because grep -c only reports the number of lines matched. The total number of matches reported by this method will be lower if there is more than one match per line.

Perceptive observers may have noticed that character names sometimes appear in all-uppercase in chapter titles (e.g. ‘MEG GOES TO VANITY FAIR’). If you wanted to count these as well, you could add the -i option for case-insensitivity (though in this case, it doesn’t affect the answer to which sister is mentioned most frequently).

While grep finds lines in files, the find command finds files themselves. Again, it has a lot of options; to show how the simplest ones work, we’ll use the shell-lesson-data/exercise-data directory tree shown below.

.
├── animal-counts/
│   └── animals.csv
├── creatures/
│   ├── basilisk.dat
│   ├── minotaur.dat
│   └── unicorn.dat
├── numbers.txt
├── proteins/
│   ├── cubane.pdb
│   ├── ethane.pdb
│   ├── methane.pdb
│   ├── octane.pdb
│   ├── pentane.pdb
│   └── propane.pdb
└── writing/
    ├── haiku.txt
    └── LittleWomen.txt

The exercise-data directory contains one file, numbers.txt, and four directories (animal-counts, creatures, proteins and writing) containing various files.

For our first command, let’s run find . (remember to run this command from the shell-lesson-data/exercise-data folder).
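$ find .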

.
./writing
./writing/LittleWomen.txt
./writing/haiku.txt
./creatures
./creatures/basilisk.dat
./creatures/unicorn.dat
./creatures/minotaur.dat
./animal-counts
./animal-counts/animals.csv
./numbers.txt
./proteins
./proteins/ethane.pdb
./proteins/propane.pdb
./proteins/octane.pdb
./proteins/pentane.pdb
./proteins/methane.pdb
./proteins/cubane.pdb

As always, the . on its own means the current working directory, which is where we want our search to start. find’s output is the names of every file and directory under the current working directory. This can seem useless at first but find has many options to filter the output and in this lesson we will discover some of them.

The first option in our list is -type d, which means ‘things that are directories’. Sure enough, find’s output is the names of the five directories (including .):
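$ find . -type d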

.
./writing
./creatures
./animal-counts
./proteins

Notice that the objects find finds are not listed in any particular order. If we change -type d to -type f, we get a listing of all the files instead:
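$ find . -type f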

./writing/LittleWomen.txt
./writing/haiku.txt
./creatures/basilisk.dat
./creatures/unicorn.dat
./creatures/minotaur.dat
./animal-counts/animals.csv
./numbers.txt
./proteins/ethane.pdb
./proteins/propane.pdb
./proteins/octane.pdb
./proteins/pentane.pdb
./proteins/methane.pdb
./proteins/cubane.pdb

Now let’s try matching by name:
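$ find . -name *.txt

./numbers.txt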

We expected it to find all the text files, but it only prints out ./numbers.txt. The problem is that the shell expands wildcard characters like * before commands run. Since *.txt in the current directory expands to ./numbers.txt, the command we actually ran was:

$ find . -name numbers.txt

find did what we asked; we just asked for the wrong thing.

To get what we want, let’s do what we did with grep: put *.txt in quotes to prevent the shell from expanding the * wildcard. This way, find actually gets the pattern *.txt, not the expanded filename numbers.txt:
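$ find . -name "*.txt"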

./writing/LittleWomen.txt
./writing/haiku.txt
./numbers.txt

ls and find can be made to do similar things given the right options, but under normal circumstances, ls lists everything it can, while find searches for things with certain properties and shows them.

As we said earlier, the command line’s power lies in combining tools. We’ve seen how to do that with pipes; let’s look at another technique. As we just saw, find . -name "*.txt" gives us a list of all text files in or below the current directory. How can we combine that with wc -l to count the lines in all those files?

The simplest way is to put the find command inside $():

$ wc -l $(find . -name "*.txt")

  21022 ./writing/LittleWomen.txt
     11 ./writing/haiku.txt
      5 ./numbers.txt
  21038 total

When the shell executes this command, the first thing it does is run whatever is inside the $(). It then replaces the $() expression with that command’s output. Since the output of find is the three filenames ./writing/LittleWomen.txt, ./writing/haiku.txt, and ./numbers.txt, the shell constructs the command:

$ wc -l ./writing/LittleWomen.txt ./writing/haiku.txt ./numbers.txt

which is what we wanted. This expansion is exactly what the shell does when it expands wildcards like * and ?, but lets us use any command we want as our own ‘wildcard’.

It’s very common to use find and grep together. The first finds files that match a pattern; the second looks for lines inside those files that match another pattern. Here, for example, we can find txt files that contain the word “searching” by looking for the string ‘searching’ in all the .txt files in the current directory:

$ grep "searching" $(find . -name "*.txt")

./writing/LittleWomen.txt:sitting on the top step, affected to be searching for her book, but was
./writing/haiku.txt:With searching comes loss

The -v option to grep inverts pattern matching, so that only lines which do not match the pattern are printed. Given that, which of the following commands will find all .dat files in creatures except unicorn.dat? Once you have thought about your answer, you can test the commands in the shell-lesson-data/exercise-data directory.

  1. find creatures -name "*.dat" | grep -v unicorn
  2. find creatures -name *.dat | grep -v unicorn
  3. grep -v "unicorn" $(find creatures -name "*.dat")
  4. None of the above.

Option 1 is correct. Putting the match expression in quotes prevents the shell from expanding it, so it gets passed to the find command.

Option 2 also works in this instance because the shell tries to expand *.dat, but there are no *.dat files in the current directory, so the wildcard expression gets passed to find. We first encountered this in episode 3.

Option 3 is incorrect because it searches the contents of the files for lines which do not match ‘unicorn’, rather than searching the file names.

We have focused exclusively on finding patterns in text files. What if your data is stored as images, in databases, or in some other format?

A handful of tools extend grep to handle a few non-text formats. But a more generalizable approach is to convert the data to text, or to extract the text-like elements from the data. On the one hand, this makes simple things easy to do. On the other hand, complex things are usually impossible. For example, it’s easy enough to write a program that will extract X and Y dimensions from image files for grep to play with, but how would you write something to find values in a spreadsheet whose cells contained formulas?

A last option is to recognize that the shell and text processing have their limits, and to use another programming language. When the time comes to do this, don’t be too hard on the shell: many modern programming languages have borrowed a lot of ideas from it, and imitation is also the sincerest form of praise.

The Unix shell is older than most of the people who use it. It has survived so long because it is one of the most productive programming environments ever created — maybe even the most productive. Its syntax may be cryptic, but people who have mastered it can experiment with different commands interactively, then use what they have learned to automate their work. Graphical user interfaces may be easier to use at first, but once learned, the productivity in the shell is unbeatable. And as Alfred North Whitehead wrote in 1911, ‘Civilization advances by extending the number of important operations which we can perform without thinking about them.’

Write a short explanatory comment for the following shell script:

wc -l $(find . -name "*.dat") | sort -n

  1. Find all files with a .dat extension recursively from the current directory
  2. Count the number of lines each of these files contains
  3. Sort the output from step 2 numerically

Key Points

  • find finds files with specific properties that match patterns.

  • grep selects lines in files that match patterns.

  • --help is an option supported by many bash commands, and programs that can be run from within Bash, to display more information on how to use these commands or programs.

  • man [command] displays the manual page for a given command.

  • $([command]) inserts a command’s output in place.