====== shell tips ======

Here's a tutorial: [[http://tldp.org/LDP/abs/html/|Advanced Bash-Scripting Guide]].
===== Quick Tips =====

The [[http://www.reddit.com/r/bashtricks/comments/hdfzc/execute_previous_command_as_root/|History Expansion character]] is "!".  To search the history for a previous "scp" command and only print it, try the first line below. But if you want to interactively find that command, type ''<Ctrl>+r,scp''.

<code bash>
$ !?scp?:p    # print the most recent command containing "scp" without running it
$ ^rscp       # i.e., press Ctrl+r, then type "scp" for the interactive search
</code>

===== bash expansion =====

<code bash>
$ cp file{,.bk}
</code>

expands to

<code bash>
$ cp file file.bk
</code>
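
Brace expansion also takes comma lists and numeric ranges, which is handy for creating several related names at once. A couple of quick illustrations (the names here are just examples):

<code bash>
mkdir -p project/{src,include,tests}   # makes project/src, project/include, project/tests
echo file{1..3}.txt                    # prints: file1.txt file2.txt file3.txt
</code>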

Rename all files that end in ''.JPG'' so they end in ''.jpeg'':

<code bash>
for file in *.JPG; do mv "$file" "${file%.JPG}.jpeg"; done
for file in *.JPG; do mv "$file" "${file/JPG/jpeg}"; done
</code>

There are also two different ''rename'' commands in the wild (the util-linux one and the Perl one), each with its own syntax:

<code bash>
rename .JPG .jpg *.JPG
rename "s/JPG/jpg/" *.JPG
</code>


===== Command Template =====

Here's a template for shell scripts that demonstrates handling options, checking the number of arguments, the length of an argument, etc.  It could still stand a bit of clean-up according to the [[http://google-styleguide.googlecode.com/svn/trunk/shell.xml|Google Shell Style Guide]].

Another good resource is [[http://robertmuth.blogspot.com/2012/08/better-bash-scripting-in-15-minutes.html|Better Bash Scripting in 15 minutes]].

<code bash>
#!/usr/bin/env bash
set -eu -o pipefail # See: https://sipb.mit.edu/doc/safe-shell/

declare -r SCRIPT_NAME=$(basename "$BASH_SOURCE")

## exit the shell (default status code: 1) after printing the message to stderr
die() {
    echo >&2 "$1"
    exit ${2-1}
}

## the options used by this script
DISK=e
declare -i VERBOSE=0

## exit the shell (with status 2) after printing the message
usage() {
    echo "\
$SCRIPT_NAME -hv [Drive Letter] (default: $DISK)
    -h      Print this help text
    -v      Enable verbose output
"
    exit 2;
}

## Process the options
while getopts "hv" OPTION
do
  case $OPTION in
    h) usage;;
    v) VERBOSE=1;;
    \?) usage;;
  esac
done

## Process the arguments
shift $(($OPTIND - 1))

if [ $# -eq 0 ]; then
    : # Let the default be used
elif [ $# -eq 1 ]; then
    if [ ${#1} -eq 1 ]; then
        DISK=$1
    else
        # 64 is EX_USAGE from sysexits.h
        die "$SCRIPT_NAME: Drive Letter can only be one character long." 64
    fi
else
    usage;
fi

## Lock this if only one instance can run at a time
# UNIQUE_BASE=${TMPDIR:-/tmp}/"$SCRIPT_NAME".$$
LOCK_FILE=${TMPDIR:-/tmp}/"$SCRIPT_NAME"_"$DISK".lock
if [ -f "$LOCK_FILE" ]; then
  die "$SCRIPT_NAME is already running. ($LOCK_FILE was found.)"
fi
trap 'rm -f "$LOCK_FILE"' EXIT
touch "$LOCK_FILE"

## The main work of this script

if [ ! -d /cygdrive/"$DISK"/backup/Users ]; then
    mkdir -p /cygdrive/"$DISK"/backup/Users
fi

((VERBOSE==1)) && echo "Starting at $(date)"
rsync /cygdrive/c/Users/me /cygdrive/"$DISK"/backup/Users

# We add "|| true" because we don't want to stop
# if the directory was already empty
rm -r /cygdrive/c/Users/me/tmp/* || true

# Note how we find the number of cores to use
make -C build_subdirectory all -j$(grep -c ^processor /proc/cpuinfo)
</code>

===== Miscellaneous Shell Tips =====

If you want a single column of just the file and path names, you can get it like so (''ls -1'' also works):

<code bash>
ls --format=single-column
</code>

But if you don't know what you're doing, you might construct something like so:

<code bash>
ls -Al | tr -s ' ' | cut -d ' ' -f10-
</code>

  - List "almost all" items in "long" format (one line per item)
  - Squeeze repeats of the space character
  - Cut away everything before the 10th field and print everything from that field onward.

Of course, if you could assert the following:

  * none of the first columns were repeats (awk would only identify the first repeated column)
  * the desired column didn't have delimiters in it (filenames with spaces)

...you could use awk:

<code bash>
... | awk '{print $10}'
</code>

Anyway, given a list of directories, they can be inserted into a ''cp'' command with xargs if you need:

<code bash>
cat list_of_directories_at_one_level.txt | xargs -I {} cp -r $SOURCEDIRPREFIX:{} $DEST
</code>
  
Useful bash commands for finding strings within Python or C/C++ files:

<code bash>
find . -name \*.py -type f -print0 | xargs -0 grep -nI "timeit"
find . -type f \( -name \*.[ch]pp -or -name \*.[ch] \) -print0 | xargs -0 grep -nI printf
</code>

A quick ''xargs'' example that runs ''echo'' once per input value:

<code bash>
seq 1 50 | xargs -I{} -n1 echo '{} Hello World!'
</code>

When you've set up Perforce to use an application for diff with ''export P4DIFF='vim -d' '', you can still do a regular diff like so:

<code bash>
$ P4DIFF=; p4 diff hello-world.cpp
</code>

It's [[http://stackoverflow.com/questions/47007/determining-the-last-changelist-synced-to-in-perforce|hard to be sure which Perforce changelist you synced to if you didn't explicitly sync to a changelist]].

So use this ''p4_sync'' function to sync to a specific changelist, and record the changelist number in a source file too:

<code bash>
p4_sync() {
    p4 changes -s submitted -m1 ... | tee p4_sync_to_change.txt
    changelist=`cut -d " " -f 2 p4_sync_to_change.txt`
    changelist_filename=changelist.h
    p4 sync ...@$changelist
    if [ -w $changelist_filename ]
    then
        sed -i 's/"[0-9]\+";/"'$changelist'";/' $changelist_filename
    fi
}
</code>

Note the use of ''$@'' vs ''"$*"'' in the next function, which automatically saves an archive of a telnet session. Also note that I remove spaces and colons. (Colons, because they interfere with opening files directly at line numbers.)

<code bash>
telnet_log() {
    curtime=$(date -Iseconds | tr : .)
    args=$(echo "$*" | tr ' ' '_')
    telnet $@ | tee $HOME/telnetlog/$args\_${curtime::-5}.log
}

last_telnet_log() {
    ls -d1t $HOME/telnetlog/* | head -n 1
}
</code>
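
As a quick reminder of that distinction, here's a minimal sketch (''show_args'' is just a throwaway name): ''"$*"'' joins all of the arguments into a single word, while ''"$@"'' keeps them as separate words.

<code bash>
show_args() {
    printf 'star: <%s>\n' "$*"   # one word: prints      star: <a b c>
    printf 'at:   <%s>\n' "$@"   # one line per argument: at: <a>, at: <b>, at: <c>
}
show_args a b c
</code>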

Of course, if you archive every telnet session like that, you'll want to occasionally (via a cronjob?) delete old logs.

<code>
find $HOME/telnetlog/ -type f -mtime +6 -delete
</code>
  
  
====== expect tips ======

What to do when you're not sure the connection will succeed?

<code>
set times 0
set made_connection 0
set timeout 120
while { $times < 2 && $made_connection == 0 } {
    spawn nc $SERVER
    send "\r"
    expect {
        "login:" {
            send "john.doe\r"
            set made_connection 1
        } eof {
            sleep 1
            set times [ expr $times + 1 ]
        } timeout {
            puts "Didn't expect to timeout."
            exit
        }
    }
}
</code>

I think the following is wrong-headed. It's not usually the case that spawn will fail.

<code>
set times 0;
set made_connection 0;
while { $times < 2 && $made_connection == 0 } {
    if { [ catch { spawn nc $SERVER } pid ] } {
        set times [ expr $times + 1 ];
        sleep 1;
    } else {
        set made_connection 1
    }
}
</code>

====== Perl tips ======

The module ''Search::Dict'' has a ''look'' function that can be used to do a binary search in an ordered dictionary file (a log file whose lines start with timestamps works too). ''File::SortedSeek'' also comes recommended.

====== Application Memory Usage ======

Use VM Resident Set Size.  See VmRSS below. (Note the [[http://stackoverflow.com/questions/10400751/how-do-vmrss-and-resident-set-size-match|difference between RSS and VmRSS]]: if one process has memory mapped, it's not usable by any other process.)

<code bash>
host:# ps -ef | grep etflix
default   1532  1081  6 22:06 ?        00:01:21 pkg_/metflix
root      2108  1046  0 22:26 ?        00:00:00 grep etflix
host:# pidof netflix
1532
host:# cat /proc/1532/status
Name:   MAIN
...
Groups:
VmPeak:   220776 kB
VmSize:   210096 kB
VmLck:         0 kB
VmHWM:     95168 kB
VmRSS:     74488 kB
...
</code>

Or, while running an application, to see how much is free over time, do this from another shell:
<code bash>
while [ 1 ]
do
    free -m | grep Mem
    sleep 3
done
</code>
Alternatively, to see the RSS use of that process alone:
<code bash>
while true; do sync; cat /proc/$(pidof yourprocess)/status | grep VmRSS; sleep 1; done
</code>
====== Measuring Available Memory ======

This note doesn't entirely make sense to me. Maybe I need to study up on "cat /proc/meminfo" vs. "cat /proc/vmstat" vs. "vmstat".

The best measure I've found for "available memory" is nr_inactive_file_pages + nr_active_file_pages + nr_free_pages from /proc/vmstat. Then you have to subtract out some heuristically determined value for the base system's working set. (That value can be 30-40 MB.)

The command ''free'' just isn't a great indicator of how much memory is available, because it doesn't account for the cached file-backed pages that could be dropped to make more memory available.
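
Here's a minimal sketch of that calculation, assuming 4 KiB pages and the field names I'd expect on recent kernels (''nr_free_pages'', ''nr_inactive_file'', ''nr_active_file''); adjust the names to match what your /proc/vmstat actually reports, and subtract your own base-working-set estimate.

<code bash>
# Sum the free + file-backed page counts from /proc/vmstat and report them in MB.
awk '/^(nr_free_pages|nr_inactive_file|nr_active_file) / { pages += $2 }
     END { printf "%.1f MB roughly available\n", pages * 4096 / (1024 * 1024) }' /proc/vmstat
</code>
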
====== Shared Memory Usage ======

To increase the limit to 256 MB from the command line (''shmmax'' is in bytes; note that ''shmall'' is measured in pages, not bytes):

<code bash>
echo "268435456" > /proc/sys/kernel/shmmax
echo "268435456" > /proc/sys/kernel/shmall
</code>

Or, edit /etc/sysctl.conf (and apply it with ''sysctl -p''):

<code>
kernel.shmmax = 268435456
kernel.shmall = 268435456
</code>

====== Performance Metrics ======

  * Use [[http://man7.org/linux/man-pages/man1/perf-timechart.1.html|perf-timechart]] (see the sketch below)
  * [[https://github.com/gperftools/gperftools|gperftools]]
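
A minimal perf-timechart session looks something like this (''./my_app'' is just a placeholder for whatever you're profiling):

<code bash>
perf timechart record ./my_app   # records events into perf.data while my_app runs
perf timechart                   # turns perf.data into output.svg
</code>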

And you can scrape logs that start with timestamps to create spreadsheet charts. Given logs like:

<code>
2016-10-13 19:54:44  memory 22a4
</code>

On a Macintosh:

<code bash>
grep memory devicelogs.txt | tr -s ' ' | cut -d " " -f 1,2,4 | \
sed 's/\([0-9\-]\+\) \([0-9:]\+\).[0-9]\+ \([0-9a-f]\+\)/\1,\2,=DATEVALUE("\1")+TIMEVALUE("\2"),=HEX2DEC("\3")/' > heapinfo.csv; \
open -a "Microsoft Excel" heapinfo.csv
</code>

And on Linux, instead of opening Microsoft Excel, that last line would be:

<code bash>
libreoffice --calc heapinfo.csv
</code>

====== Cron ======

Keep tasks serialized with [[https://linux.die.net/man/1/flock|flock(1)]]:

    (
         flock -n 9 || exit 1
         # ... commands executed under lock ...
    ) 9>/var/lock/mylockfile
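
In a crontab entry itself, the same idea fits on one line; a sketch (the lock path and ''/usr/local/bin/myjob'' are placeholders):

    */5 * * * * flock -n /var/lock/myjob.lock -c /usr/local/bin/myjob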

====== Retrieving Symbols with addr2line ======

You can resolve a backtrace (stack trace) to source files and line numbers by piping it through addr2line:

    $ cat << EOF | cut -d " " -f 3 | tr -d "[]" | \
      addr2line -e builds/austin/src/platform/gibbon/netflix | \
      xargs -d '\n' realpath --relative-to=.
    > 7/22 app() [0xf7878] (0xf7878)
    > 8/22 app() [0x39c2f8] (0x39c2f8)
    > 9/22 app() [0xe1964] (0xe1964)
    > EOF
    src/Application.h:106 (discriminator 3)
    src/platform/main.cpp:521
    src/Application.cpp:95

====== Sort by Frequency ======

I ran the following P4 command to find out who's been editing a file recently:

    $ find . -name fname.cpp | xargs p4 filelog -s -m 10 | grep -P "^\.\.\. #" | \
      awk '{print $9}' | cut -d @ -f 1 | sort | uniq -c | sort -nr

====== Additional Keywords ======

Linux, Unix, *nix