E: Internal Error, No file name for libtinfo5
location: ubuntuforums.com - date: October 21, 2012
Alright, so I upgraded my Ubuntu 12.04 to Ubuntu 12.10 (over ssh). Now, before you all start bashing me for upgrading over ssh: I knew the risks and I took them. My system works fine-ish, but there are some problems. I cannot install any packages through aptitude; no matter what I try, I always end up with the same error:
E: Internal Error, No file name for libtinfo5
sudo apt-get -f install
sudo apt-get -f autoremove
sudo apt-get -f upgrade
All of them show me that there is something that needs to be removed or installed, but the process halts as soon as the libtinfo5 error appears.
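For what it's worth, the "Internal Error, No file name for <package>" message usually means apt has lost track of the archive file for that package partway through the upgrade, and a commonly suggested workaround is to fetch the .deb by hand, install it directly with dpkg, and then let apt finish the job. A rough sketch (assuming apt-get download is available and the quantal archive still carries libtinfo5):
apt-get download libtinfo5        # fetch the .deb for libtinfo5 into the current directory
sudo dpkg -i libtinfo5_*.deb      # install it directly, bypassing apt's confused bookkeeping
sudo apt-get -f install           # then let apt resolve whatever is still pending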
[SOLVED] How do you find and remove largest file without having to manually type the file name
location: linuxquestions.com - date: November 23, 2011
Hi, I'm only starting to learn Linux and I am stuck on this problem.
I have found the largest file by using
ls -S|head -1
But I don't know how to remove this file without manually typing the file name.
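Building on the ls -S pipeline above, one way to avoid retyping the name is to substitute the command's output straight into rm. A minimal sketch (it trips over directories and names containing newlines, so the -i prompt is kept as a safety net):
rm -i -- "$(ls -S | head -n 1)"    # remove the largest entry in the current directory, asking first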
Getting file name from inode number
location: linuxquestions.com - date: February 24, 2007
Hello again! I have a question. I am writing a program in C. If I have a filename, I can find its inode number by using struct stat. Is there a function that works the other way around? I mean, if I have the inode number, can I find the filename? Or, at least, ONE of the filenames? I know that there can be multiple filenames that have the same inode number, but they actually refer to the same file in the file system.
What I want to do is create hard links to inode numbers. For example, if there is a file file1.txt with inode number 123456, I want to create another file called file2.txt with the same inode number. I have a variable that holds the initial inode number. Searching the man pages, I found the link() function:
int link(const char *oldpath, const char *newpath);
I have the newpath, but I want to find the oldpath. I have the inode number. Any ideas?
Thanks in advance.
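There is no system call that maps an inode number back to a path, so the usual approach is to walk the directory tree and compare each entry's st_ino against the number you already have (in C that means readdir()/stat(), or nftw()). As a command-line illustration of the same idea (the /home search root, the inode 123456 from the example, and the /home/user/file2.txt target are only placeholders):
find /home -xdev -inum 123456    # print every path with that inode; -xdev stays on one filesystem, since inodes are only unique per filesystem
find /home -xdev -inum 123456 -exec ln {} /home/user/file2.txt \;    # hard-link the match as file2.txt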
Wget: change downloaded file name, pipe?
location: ubuntuforums.com - date: June 19, 2008
I'm writing a program in python that uses wget to download images to a directory and change the file name to conform to a certain pattern. I've read through the wget manual but cannot find a way to change the name of the downloaded file.
Also, is there a way to pipe the output of wget to do that? That would be extremely convenient.
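For reference, wget's -O option writes the download to whatever file name you give it, and -O - sends the body to standard output so it can be piped; either can be driven from Python via subprocess. A small sketch (the URL and target name below are made up for illustration):
wget -O image_0001.jpg 'http://example.com/photo.php?id=1'       # save under a name of your choosing
wget -qO- 'http://example.com/photo.php?id=1' > image_0001.jpg   # same result via standard output and a redirect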
Undo a file rename in which I accidentally deleted the file name
location: ubuntuforums.com - date: September 3, 2009
Is there any way to undo a file rename in which I accidentally deleted the file name?
I tried pressing Ctrl+Z numerous times, but it still doesn't undo.
[SOLVED] CLI Issues Annoying File name: \033[A\033[B\b\b , delete how?
location: linuxquestions.com - date: July 23, 2010
First, please no giggling at seeing some Fortran source code. I would much rather be writing in python or some more modern language. Not my choice.
My issue is, however, that during the course of some Fortran execution, a file was created with some non-printable characters and some escape sequences. How do I delete this file? Here is a copy of my 'ls -l' output:
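Whatever the mangled name turns out to be, one robust way to delete such a file is to refer to it by its inode rather than by name. A sketch, with 123456 standing in for whatever inode number ls -li actually reports:
ls -li                                              # the first column is each entry's inode number
find . -maxdepth 1 -inum 123456 -exec rm -i {} \;   # remove the entry with that inode, confirming first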
wget preserve file name and incremental downloading
location: ubuntuforums.com - date: March 20, 2010
Dear Ubuntu users,
I'm having a little problem figuring out how to make wget behave the way I want it to. My *nix experience is somewhat limited, but I'm fairly sure wget can perform the following tasks; I just don't know how.
1) Preserving file names
When downloading a dynamic URL with wget (e.g. http://www.example.com/download.php?did=232), the file gets the name "download.php?did=232" instead of File001.doc, the name Firefox's download manager saves it under. How can I make wget keep the original file name?
2) Incremental/decremental downloading
First of all, I would like to point out an existing program I use for Firefox. It's called URL Flipper, and it's just the thing I need.
As you can see from the picture, it's pretty self-explanatory. I'm looking for something similar, but it doesn't need to be as complex as URL Flipper. It only needs to handle decimals, and I plan to use it together with question 1) as mentioned above.
Why not just keep using URL Flipper?
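For 1), wget can either honour the server-supplied name with --content-disposition (this only helps when the server actually sends a Content-Disposition header) or write to an explicit name with -O. For 2), a plain shell loop over seq covers the simple decimal case. A rough sketch using the example URL from the post; the 1-50 range and the file_N.doc names are just an illustration:
wget --content-disposition 'http://www.example.com/download.php?did=232'   # use the name the server suggests
for i in $(seq 1 50); do
    wget -O "file_$i.doc" "http://www.example.com/download.php?did=$i"     # walk the did= counter and name each file
done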
Display by File Name, File Size, and File Owner using ls
location: linuxquestions.com - date: August 12, 2008
I am trying to display the files one per line, sorted, showing file name, size and owner.
So far I have this:
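ls -l by itself already prints owner, size and name on one line per file, and GNU stat lets you pick out just those three fields so sort can then order by any of them. A hedged sketch (GNU stat and sort assumed; file names containing spaces will confuse the column split):
stat -c '%n %s %U' * | sort              # file name, size in bytes, owner; sorted by name
stat -c '%n %s %U' * | sort -k2,2 -rn    # same fields, largest files first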
Shell Script: Add Users from file
location: linuxquestions.com - date: December 6, 2004
I'm working on a project for one of my classes, and I'm totally new to shell scripts and kind of in shock right now. Anyway, my assignment is to write a simple script that adds users to the system based on usernames and passwords stored in another file (yes, passwords in plaintext - obviously security isn't an issue for the assignment).
I'm stumped on how to get the password assigned correctly. I've tried doing it with useradd -p as well as passwd (normally and by doing something like echo $PASSWORD | passwd --stdin). Apologies if this makes no sense, low on sleep due to finals week. :P
Here is my script thus far:
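For what it's worth, passwd --stdin is a Red Hat extension and is missing from many distributions, which is one common reason that route fails; the usual portable suggestion is chpasswd, which reads user:password pairs on standard input. A minimal sketch, assuming an input file named users.txt with one username:password pair per line (the file name and separator are assumptions):
#!/bin/bash
# create one account per "username:password" line in users.txt
while IFS=: read -r user pass; do
    useradd -m "$user"                # add the account with a home directory
    echo "$user:$pass" | chpasswd     # set the (plaintext) password, as the assignment allows
done < users.txt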
trying to insert JPG file into a mysql table
location: ubuntuforums.com - date: March 12, 2011
I would like to store JPG files in a mysql database table. I figured that using BLOB fields on the one hand and the load_file() function on the other would do the trick.
Here is how I set up the table:
mysql> create table blobtable (id int(10) not null auto_increment, fileName varchar(15) not null, file longblob not null, primary key(id) );
Query OK, 0 rows affected (0.51 sec)
then I tried to enter the data:
mysql> insert into blobtable(filename,file)values('pic',load_file('/var/www/temp/IMGP4764.JPG'));
which got me the following error signal:
ERROR 1048 (23000): Column 'file' cannot be null
It seems to me that the file path to the JPG file (although correct) is the source of the problem, and is causing the load_file command not to work.
My first attempt was with the JPG file in my PC's home directory. When that returned the same error message, I figured that maybe the file had to be in the PC's (server's) web area so I put it under the 'www' dir
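LOAD_FILE() returning NULL (which then trips the NOT NULL constraint on the file column) usually has nothing to do with the path being wrong: the file is read by the MySQL server process itself, so it must be readable by the mysqld user, the connecting account needs the FILE privilege, on Ubuntu the AppArmor profile for mysqld can block directories such as /var/www outright, and secure_file_priv can also restrict where the server may read from. Some hedged checks from the shell (youruser is a placeholder for whichever account ran the INSERT):
ls -l /var/www/temp/IMGP4764.JPG                                                   # is the file world-readable?
sudo -u mysql head -c 4 /var/www/temp/IMGP4764.JPG >/dev/null && echo "readable"   # can the mysql user read it?
mysql -u youruser -p -e "SHOW GRANTS FOR CURRENT_USER(); SHOW VARIABLES LIKE 'secure_file_priv';"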