Bash script to remove string from all text files, recursively
location: linuxquestions.com - date: April 26, 2012
Sorry for bothering you; I've tried searching on the forum and with Google but haven't found anything useful; I believe it's just me who can't put two and two together.
Long story short: someone hacked my hosting space and added this code: http://pastebin.com/wqrLXUG4 to the first line of all text files (e.g. .php, but not only those), and I'm trying to find a way to remove it with a bash script without having to manually check every single file in my web space.
What I'd like to achieve is to run this script in the root folder of my account and make it delete this code from all text files in all the various folders and subfolders.
Unfortunately, that code is not the only thing on the first line: it's simply prepended to the actual first line of each file, so I can't just delete the first line of every text file, or I'd delete the good code on that line too.
I've never really understood the syntax for regexps and I've little to no knowledge of
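A minimal sketch of the kind of loop that could do this, assuming GNU grep and sed are available on the host; INJECTED below is only a placeholder, not the actual pastebin payload:

#!/bin/bash
# Sketch only: INJECTED stands in for the exact injected string from the
# pastebin dump.  The sed pass escapes characters that are special in a
# sed pattern so the string is matched literally.
INJECTED='<?php /* placeholder for the injected code */ ?>'
ESCAPED=$(printf '%s' "$INJECTED" | sed 's/[][\/.^$*]/\\&/g')

# Only touch files that actually contain the string, remove it from the
# first line only, and keep a .bak copy of every file that is modified.
# Assumes file names contain no newlines.
grep -rlF "$INJECTED" . | while read -r f; do
    sed -i.bak "1s/$ESCAPED//" "$f"
done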
[SOLVED] Bash script to read through text files and find largest number in files
location: linuxquestions.com - date: September 13, 2012
I have a number of text files named spec1.txt, spec2.txt, etc. and I would like a script to read through each file and find the largest number across all files. For instance, if the largest number in file 1 was 2, the largest in file 2 was 7, and the largest in file 3 was 4, then I would like the script to tell me that 7 is the largest number.
I'm new to bash, previous scripts have mostly been for moving files around directories and renaming files, and I've done a bit of reading around for this problem but can't quite figure it out.
I found http://www.bashguru.com/2010/05/how-...-in-shell.html and tried 'Method 2' as a first step in my problem, but I can't get it to work; I get an error saying there's no such file or directory. Initially I got an error saying "$I: ambiguous redirect" and I'm not sure what's changed since, but never mind. Another point to mention is that the value I'm after is the second parameter in each file, separated from the first by white space. This is my
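A minimal awk sketch for the overall goal, assuming the number of interest really is the second whitespace-separated field on each line of every spec*.txt file:

# NR == 1 seeds max with the very first value; afterwards only larger
# values replace it.  "+ 0" forces a numeric comparison.
awk 'NR == 1 || $2 + 0 > max { max = $2 + 0 } END { print max }' spec*.txt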
Bash script help appending lines to files
location: ubuntuforums.com - date: June 11, 2013
Background: I have a bunch of files named username.sub in single-letter directories under script_testing (the first letter of the username is the folder name). For every username.sub, I need to check whether the line user.$username.contacts\t exists and, if not, append it. I am having trouble with my code so far. This should be a rather simple thing, I think. I am currently testing with 4 username.sub files in subdirectory "g". I have gone through many changes; this is how the script currently stands:
Path_to_files=$Path_to_files$( find $HOME/script_testing/ -type d -printf ":%p" )
sub_files=$Path_to_files$( find . -type f *.sub)
if grep user.$username.Contacts $username.sub; then
    echo "Contacts already subscribed"
else
    echo "subscribing to contacts"
    echo -e "user.$username.Contacts\t" >> $username.sub
fi
Bash script to execute all the files from a folder
location: ubuntuforums.com - date: February 18, 2014
I'm new to Ubuntu script writing. I want to write a script that can read one file at a time, execute it, and store the output in another folder.
For example: the folder "input" has various sound files in WAV format.
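A minimal sketch of such a loop; process_sound is only a placeholder for whatever command actually handles each wave file, and the input/ and output/ folder names are assumptions:

#!/bin/bash
# Sketch only: run a (placeholder) command on every .wav file in input/
# and store each result in output/ under a matching name.
mkdir -p output
for f in input/*.wav; do
    [ -e "$f" ] || continue                     # skip if no .wav files exist
    process_sound "$f" > "output/$(basename "$f" .wav).txt"
done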
Bash ambiguous redirect: redirect to multiple files
location: linuxexchange.com - date: August 8, 2013
$ echo "" > /home/jem/rep_0[1-3]/logs/SystemOut.log
bash: /home/jem/rep_0[1-3]/logs/SystemOut.log: ambiguous redirect
Can I redirect to multiple files at a time?
Edit: Any answer that allows use of the ambiguous file reference?
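A sketch of two common workarounds that keep the ambiguous glob as the file reference:

# A single redirection can only name one file, but tee happily takes a
# whole glob's worth of targets:
echo "" | tee /home/jem/rep_0[1-3]/logs/SystemOut.log > /dev/null

# Equivalent explicit loop:
for log in /home/jem/rep_0[1-3]/logs/SystemOut.log; do
    echo "" > "$log"
done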
Bash Script for Moving X Number of Files from /direct1 to /direct2?
location: linuxquestions.com - date: January 29, 2010
Hi guys. I have no Bash skills but I'm badly in need of a script to move the first 20,000 (or whatever number of) files from a directory containing over 200,000 files to a new directory. The problem is that I can't access the directory because it's so large, so I want to break it into chunks, but keep the files in order if possible. If you know of a script to meet my needs, please post it. Otherwise I'll post my fumblings with Bash until I find the right way to fix my problem.
Should I use
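A minimal sketch of one way to do it, assuming plain file names (no newlines or leading spaces) and GNU find:

#!/bin/bash
# Sketch only: move the first 20000 files, sorted by name, from
# /direct1 to /direct2.
find /direct1 -maxdepth 1 -type f | sort | head -n 20000 | while read -r f; do
    mv "$f" /direct2/
done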
Bash script to email all JPG files within a specific folder [closed]
- date: January 1, 1970
I am using the software Motion on my Raspberry Pi (Ubuntu) to connect to my network security camera. When this camera detects movement, it dumps JPGs every second into a /tmp/camera folder on this Ubuntu machine.
Motion allows you to run a custom Bash script either after each picture is saved (every second) or after movement has stopped (at the end of all pictures).
What I want is to send these images to my phone (and eventually FTP them too). Currently, I am using the option to run a script on EVERY picture save, using 'mail' on Ubuntu to attach the recently saved file. This doesn't work very well because one 'movement' might have 10 image frames, which means I get 10 different emails.
The current script is simply: on_picture_save echo "Motion Detected at %Y-%m-%d %T" | mail -a %f -s "Subject" [email protected]
So I was thinking I need a custom Bash script that I set to run at the end of the motion being detected. It needs to attach all JPGs from a given folder (not zipped or I
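A minimal sketch of such a script, intended for Motion's on_event_end hook; it assumes the same mail command as the one-liner above (so -a attaches a file) and uses a placeholder recipient address:

#!/bin/bash
# Sketch only: attach every JPG currently in /tmp/camera to a single
# mail, then clear the folder for the next event.
DIR=/tmp/camera
ATTACH=()
for img in "$DIR"/*.jpg; do
    [ -e "$img" ] || exit 0          # no frames captured, nothing to send
    ATTACH+=(-a "$img")
done
echo "Motion detected at $(date '+%Y-%m-%d %T')" \
    | mail "${ATTACH[@]}" -s "Motion alert" [email protected]
rm -f "$DIR"/*.jpg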
using date command in bash script: next occurrence of 3pm
location: linuxexchange.com - date: August 24, 2013
I would like, in epoch time, "the next occurrence of 3pm".
The examples and the manual I've looked at do not seem to cover this case.
To be clear, if the current date is August 20th at 5pm, I would like this to return the epoch seconds of August 21st at 3pm. If the current date is August 20th at 2pm, I would like this to return the epoch seconds of August 20th at 3pm.
I tried the following but it always returns 3pm of today:
date -d "15:00:00" +%s
Bash Script: count unique lines in file
location: linuxexchange.com - date: January 1, 1970
I have a large file (millions of lines) containing IP addresses and ports from a several-hour network capture, one ip/port per line. Lines are of this format:
There is an entry for each packet I received while logging, so there are a lot of duplicate addresses. I'd like to be able to run this through a shell script of some sort which will be able to reduce it to lines of the format
where count is the number of occurrences of that specific address (and port). No special work has to be done, treat different ports as different addresses.
So far, I'm using this command to scrape all of the ip addresses from the log file:
grep -o -E '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+(:[0-9]+)?' ip_traffic-1.log > ips.txt
From that, I can use a fairly simple regex to scrape out all of the ip addresses that were sent by my address (which I don't care about)
I can then use the following to extract the unique entries:
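A sketch of the usual pipeline for this, plus a single-pass alternative:

# Collapse duplicates and prefix each unique line with its count,
# most frequent first.
sort ips.txt | uniq -c | sort -rn

# Single-pass alternative in awk, avoiding the up-front sort of
# millions of lines; only the (much smaller) unique set gets sorted.
awk '{ count[$0]++ } END { for (line in count) print count[line], line }' ips.txt | sort -rn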
Bash Script Passing command string between variables
location: linuxquestions.com - date: April 5, 2010
I'm writing a script to quickly mount/unmount specific or all USB devices I plug into my laptop. I know this is a buggy way to essentially reinvent the wheel, but I'm just practicing.
What I'd like to do:
Initially, I wanted to do what I thought was a simple thing: attach a string to a command executed further down in the script, for example appending MNTOPT_NTFS='-o "umask=002,utf8" -t ntfs3g -L' to a mount command executed in a function. It ends up looking like "mount $MNTOPT_NTFS $2 $MNTDIR/$2" or something to that effect. Mount did not seem to parse the appended string correctly.
Next, I tried to store the entire mount command in a variable to be parsed later in my mount function. The problem is that the variables present within the $MNTCMD variable are not correctly parsed when $MNTCMD is called. Have a look at the script following the "vfat" condition.
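A minimal sketch of one common fix for this kind of problem: keep the options in a bash array rather than a plain string, so the quoted umask option survives word splitting. The device and mount point names below are illustrative, not taken from the original script.

# Sketch only: expanding "${MNTOPT_NTFS[@]}" keeps each option word
# intact, including the comma-separated umask/utf8 value.
MNTOPT_NTFS=(-t ntfs-3g -o "umask=002,utf8")
mount "${MNTOPT_NTFS[@]}" "/dev/$2" "$MNTDIR/$2"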
Anyway, here is the script: