Tuesday, December 20, 2011

Add styles permanently to your Blogger layout

So if you're like me, you probably write posts. These posts contain paragraphs, images, maybe dot-points and, yes, code. What many bloggers do with posted code is give it a certain style for easier readability:
$ something like this
Up until now, every time I wanted to do that, I first declared the style I wanted in the HTML editing view. This was done by putting this at the start of every post that needed it:
<style type="text/css">
pre.source-code {
  font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace;
  color: #000000;
  background-color: #eee;
  font-size: 12px;
  border: 1px dashed #999999;
  line-height: 16px;
  padding: 10px;
  overflow: auto;
  width: 100%;
}
</style> 

But now, this part can be skipped and all I have to do now is just wrap the text in the appropriate tags and Blogger will do the rest. But first, the styling above needs to be added to your Blogger settings.

While logged into your Blogger account, navigate to the page that shows you all the info for the blog that you want to customize, in my case "My Natural State...".

With the 'Layout' option on the left selected, click on the 'Template Designer' text just under the blog title.



You should be taken to the template wizard that you would have gone through the first time you made the blog. Select "Advanced" -> then scroll down to "Add CSS" -> and then add your style!


That's all there is to it. Now when you make a new post, you still have to type the tags with the HTML tab selected up the top, otherwise they will be read as plain text. Oh...and this just means that you only have to remember the tags; luckily I only have 1, heheh.
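For example, a code snippet in a post can now just be wrapped like so (using the class name from the style above; the command inside is placeholder text):

```html
<pre class="source-code">
$ something like this
</pre>
```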

Enjoy, ciao!

Monday, December 19, 2011

HTML5 FileAPI: Display Drag n Drop image file

Sometimes accidents are great, and as always, something can be learned. Yesterday, while trying to figure out a way to display .csv files in the web browser, I stumbled upon the FileAPI, part of HTML5 (it can't display .csv files though...I was just distracted :D). One use that has become apparent to me and others is that files can be dropped straight into the web browser, which can then read the information they contain.

Quick video here.
FileReader Docs (by Mozilla) here, this is mainly what I will be describing.
Other links here, here and here.

So, for the rest of this post, I'm just going to paste some code so you can do what snowcrashbeta did (video link above). All you will need is an index.html file and an associated script.js file. You don't really need to know much JS, but it helps when finding problems. The way I would describe what's going on is that:
  • elements on the website wait for specific actions to occur,
  • when they do (a file is dropped onto it), a js function is called which then (and depending on how you set it up) takes the file and reads the information embedded in it (filename, type, size etc.), 
  • once it is read, you can then display this information or do something else...like display the image. The file never actually gets uploaded anywhere; it is all read locally by the browser, how neat is that!
Below will be the code, and MY interpretation of what's going on. Yes, there is a high chance that I could be wrong, but I did do a little bit of reading...

index.html
<!DOCTYPE html>
<html>
<head>
 <title>Drag n Drop</title>
 <meta http-equiv="content-type" content="text/html;charset=utf-8" />
 <meta name="generator" content="Geany 0.20" />
 <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script>
<style>
 div { border-radius: 20px; margin-bottom: 10px; padding: 15px; }
 #wrapper { margin: 20px auto; max-width: 600px; }
 #dropper { border: 5px dashed #ccc; height: 100px; text-align: center; font-size: 60px; color: #ccc;}
 #fileinfo { border: 5px solid black; height: 30px; }
 #filedata { border: 5px solid black; height: 250px; }
</style>
</head>

<body>
<div id="wrapper">
 <div id="dropper">Drop File Here</div>
 <div id="fileinfo">File Info</div>
 <div id="filedata"><img id="fileimage" src="" alt="Image"/></div>
</div>
 
 <script src="script.js"></script>
 
</body>
</html>

<head></head>: where the title, links to script files and styles are stated
  • <script src="https://ajax.googleapis... loads the jQuery library, which script.js uses for $(document).ready() and .html(); the file reading itself is done by the browser's built-in FileReader
  • div#wrapper...#dropper...#fileinfo...#filedata... CSS styles for these tags
<body> </body>: what people will see
  • <div id="dropper">... dropper element. Where users will drop files.
  • <div id="fileinfo">... fileinfo element. Where information about the file is displayed.
  • <div id="filedata">... filedata element. Holds the image since <img> tags do not have a border option (I'm sure there's a work around).
    • <img id="fileimage" src=""... within the <div id="filedata"... tags. This is where the image will be loaded after the user drops it
  •  <script src="script.js">...load our script file. Note: this can be put at the top of the file where the other one was, with the effect that it will be loaded before the rest of the page. Likewise, the <script... at the top can be put down here, and will then be loaded after everything else.
The website should look like this. And if you drop an image onto the element, it should display in the box below. So what's going on? Read on.

script.js
$(document).ready(function() {
 var dropbox = document.getElementById("dropper");
 
 dropbox.addEventListener("dragover", dragOver, false);
 dropbox.addEventListener("drop", drop, false);
});

function dragOver(evt) {
 evt.stopPropagation();
 evt.preventDefault();
}

function drop(evt) {
 evt.stopPropagation();
 evt.preventDefault();
 
 var files = evt.dataTransfer.files;
 
 if (files.length > 0) handleFileInfo(files);
}

function handleFileInfo(files) {
 var f = files[0];
 var output = [];
 output.push('<strong>', f.name, '</strong> (', f.type || 'n/a', ') - ',
     f.size, ' bytes, last modified: ',
     f.lastModifiedDate.toLocaleDateString());
     
 $("#fileinfo").html(output.join(""));
 
 var reader = new FileReader();

 reader.onload = function (evt) {
  document.getElementById("fileimage").src = evt.target.result;
 };
 reader.readAsDataURL(f);
}

$(document)...});
This is where the event listeners (self-explanatory really) for the "dropper" element are defined. Since we have defined an area where we want users to drop files, we have to link a bit of code to certain events; when an event fires, that code is run. "dragover" and "drop" are predefined event names, and I suspect many more exist for us to experiment with :).

It is recommended that you watch that video I linked to above, that's really how I learnt this.

function dragOver(evt) {...}
This is to prevent the default action from occurring when a user drags a file over the web browser. Cancelling the "dragover" event is what tells the browser that this element accepts drops; otherwise the browser's default behaviour takes over and it simply opens the file itself. But we want to do much more than that.

function drop(evt) {...}
Same as above. But when the user drops the file, we want to do something. We could call handleFileInfo() straight away, but what I've done here is a quick if() statement to check that at least one file was actually dropped before processing.

function handleFileInfo(files) {...} The file processor.
  • output.push(..html(output.join("")); this takes the file the user has dropped and reads certain parameters that belong to it (name, type, size, last modified date). It then concatenates these and sends the result to "fileinfo" - which we defined in index.html.
  • var reader... creates a new FileReader object, which does the actual reading of the file.
  • reader.onload...readAsDataURL(f); ... onload defines what happens once the read finishes: find the "fileimage" element and set its .src to evt.target.result, which for readAsDataURL() is a data: URL containing the image data itself (base64-encoded), so the browser can display it without the file going anywhere.
    • readAsDataURL(f) Very important! This is the method by which the reader reads the file. See FileReader for more info, scroll about half way down.
Done!  There is a lot of stuff here that I don't know about, but it has definitely given me a primer for HTML5 capabilities...and some ideas for my website (which is pretty dull to say the least!).

If you would like to download these two files and give it a go, get them here:
FileAPI_DnD_example.tar.gz
Many thanks! Till next time, Ciao!

Tuesday, December 13, 2011

Mv folders with spaces

So something has happened to my 320GB external HDD and I've started to get corrupt files :(. Not good. It is a few years old but I think that my best bet would be to remove everything and reformat it. This also gives me a chance to clean up all the shit and solve problems like how to move folders that have spaces in them - since spaces tend to cause a bit of trouble for programmers.

My solution is fairly simple, probably not the best and/or shortest, but the job is now done, and I can reformat my HDD.

The steps can be summarized as follows:

1) ls the directory with all the folders that you need to move and send the output to a .txt file. Before you ask why I didn't just move the parent folder: well, to put it simply, I got errors of some sort...so I had to do it the long way
2) Using 9 lines of python and the 'os' library, read the output.txt file and add "s to the start and end of every line. Close output.txt
3) Use nawk to read the updated output.txt file and move the folders (with spaces...grr) to the desired location.

Let's get started!

So here's the code and how I've done it. For the purpose of this example, I will compare the Music folder on my media drive and the Music folder on my desktop - each of which contain several sub-folders of artists. But if you want to just mv folders, then just skip some of the steps.

1)
~$ ls ~/Music > ~/computer-ls.txt

1a)
This is optional. Since I want to compare the contents of ~/Music to /media/Music and move only those folders that are in ~/Music but not in /media/Music, do this step: use diff to compare the .txt files, grep for > or <, cut off the > and < (to match folder names), and then send the output to a file:
~$ ls /media/Music > ~/media-ls.txt

~$ diff ~/computer-ls.txt ~/media-ls.txt | grep "<" | cut -b3- > move.txt
~$ diff ~/computer-ls.txt ~/media-ls.txt | grep "<" | cut -b3- > move2.txt

When using diff, the output is read such that whichever way the arrow points indicates which .txt file the string is in but not the other, i.e. using the code above, if I get the output < Ramones, then I know that the Ramones folder is listed in computer-ls.txt but not in media-ls.txt.
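Here's a tiny reproduction of that pipeline with made-up folder names (not my real listings), just to show the arrow-stripping in action:

```shell
# computer has AC-DC and Ramones; the media drive only has AC-DC
printf 'AC-DC\nRamones\n' > /tmp/computer-ls.txt
printf 'AC-DC\n' > /tmp/media-ls.txt

# lines prefixed "<" are only in the first file; cut -b3- strips "< "
diff /tmp/computer-ls.txt /tmp/media-ls.txt | grep "<" | cut -b3-
# Ramones
```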

Two files are needed because, in the next step, Python reads from one file (opened with 'r') while writing the quoted names to the other (opened with 'w') - a file opened for reading cannot be written to, and vice versa.

If you wish, review the .txt files and the folders themselves to see if it is true about which folders are missing.

2) Skip this step - python not needed.
The python bit. Python should come installed on your Linux flavour, if not, apt-get install python :). Open a python cli, then run the following command using the .txt files we just created.
~$ python
>>> import os
>>> fread = open(os.path.expanduser('~/move.txt'), 'r')
>>> fwrite = open(os.path.expanduser('~/move2.txt'), 'w')
>>> for line in fread:
...     fwrite.writelines("\"" + line[:-1] + "\"\n")
... 
>>> fread.close()
>>> fwrite.close()
>>> quit()

(Note: Python's open() doesn't expand the ~ shortcut by itself, hence os.path.expanduser from the 'os' library.)

The move2.txt file should now have the same folder names as move.txt, but will have "s at the start and end of every line. "like this"

3) See update below.
Nawk bit. You should have done all this in the same directory, just don't forget to clean up :P. The last command uses nawk (a file scanner which can do a lot more!) to read the move2.txt file, and then run the mv command.

~$ nawk '{ system("mv -v " $0 " /media/Music/") }' ~/move2.txt

Make sure to give the correct file - the one that we changed.

Simple break-down (since I'm still an amateur) - if easier, maybe imagine nawk is like a 'for i in array/file/variable' command:
  • nawk > the nawk app
  • '{ command }' > everything inside the single quotes and curlies is what you want to do, spacing is critical here
  • system() > tells nawk to run the command in terminal
  • $0 > tells nawk to get the whole line. $1, $2, $3 etc. can be used to read only a single component of a line where the components may be separated by a space (grr). i.e. if the code above used $1 and a folder was called "Black Sabbath", then the output would be Black and we would fail to mv the folder.
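The $0 vs $1 difference can be demonstrated on a folder name with a space (I'm using awk here; nawk behaves the same for this):

```shell
# $1 only grabs the first space-separated field...
printf 'Black Sabbath\n' | awk '{ print $1 }'   # prints: Black

# ...while $0 grabs the whole line
printf 'Black Sabbath\n' | awk '{ print $0 }'   # prints: Black Sabbath
```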
Done! Since we used mv -v (verbose), we should be able to see the folders being moved over.

Hope this helps! Till next time, Ciao!

Update:
Step 2) not needed. "s can be inserted in the nawk step as follows:
~$ nawk '{ system("mv -v \"" $0 "\" /media/Music/") }' ~/move.txt

Tuesday, December 6, 2011

Installing TweetDeck on Ubuntu Linux

This one is for all the Twitts out there looking to use a dedicated Twitter client for viewing their subs/lists/trends etc. While there are several clients available and native for Ubuntu (though I am yet to try them), TweetDeck gives a great interface with easy personalization. It also has an inbuilt URL shortener and update notification which can be set to show in the top-right corner...although this can be annoying if you follow a lot of Twit-a-holics (The type I like :D)!

This HowTo will show how to install TweetDeck and the AdobeAir package (which TweetDeck depends on) for Ubuntu. This install was done on kernel version 3.0.0-13.


1) Open up terminal, and direct to a folder where you want to work. I went to my ~/Downloads folder. Then, acquire the AdobeAIRInstaller with wget - it will download to the current directory that you are in. Use ls if you want to check that it is there. Change the permissions so that it can be executed. Execute it - you should get an Adobe window popup and ask you whether you want to install. Follow the options and after about 5 clicks it should be done. Remove the installation file (or keep it). Code is below:
$ wget http://airdownload.adobe.com/air/lin/download/latest/AdobeAIRInstaller.bin
$ chmod +x ./AdobeAIRInstaller.bin
$ sudo ./AdobeAIRInstaller.bin
$ sudo rm -rf AdobeAIRInstaller.bin

If you don't know, AdobeAIR is a cross-platform runtime environment. It is needed to install and run certain programs (like TweetDeck) that are built on top of it.

Navigate to the TweetDeck website and hit Download. The AdobeAIR window should popup. Follow the prompts and change the installation directory if so desired. Done!

Hope this helped. If AdobeAIR is not installed, then clicking on the TweetDeck Download button will not work, no matter how long you wait.

Happy Tweeting!

Monday, November 7, 2011

Gear...

Need to get in gear.

Knock it up, knock it up, knock it up!

Friday, October 28, 2011

Trembling Ambition

Waiting for my departure,
There's a place that I haven't been.
Waiting for the words to finish,
There's trembling in novocaine speak.

What did I see?
The world did turn and shed a tear.
Where did I dream,
The relic of a time only we could see?

And when I wailll...
By recognition and ample time,
Oh I don't refrain,
We've recovered from a trembling ambition.
And there's time to make haste.

It doesn't take years to uncover,
But many to build it up.
To see what's yet to come,
By forgetting what brought us here.

And when I trail...
I go on and on at you.
But don't let me fail,
It's ok that we are here.
It's ok, we're making memories.

(C) me, 2011

Friday, October 7, 2011

Lyrics: Bob Dylan - My Back Pages

I feel a breeze, the air it carries a sweet tenderness about it. It is not that which compels me to proceed, rather that which tells me how. There be many tales which remain untold, that require to transcend time. But it is the words of a well known man, which cite the reasons not to be bold (insert more appropriate word here).

Bob Dylan - My Back Pages

Crimson flames tied through my ears
Rollin’ high and mighty traps
Pounced with fire on flaming roads
Using ideas as my maps
“We’ll meet on edges, soon,” said I
Proud ’neath heated brow
Ah, but I was so much older then
I’m younger than that now

Half-wracked prejudice leaped forth
“Rip down all hate,” I screamed
Lies that life is black and white
Spoke from my skull. I dreamed
Romantic facts of musketeers
Foundationed deep, somehow
Ah, but I was so much older then
I’m younger than that now

Girls’ faces formed the forward path
From phony jealousy
To memorizing politics
Of ancient history
Flung down by corpse evangelists
Unthought of, though, somehow
Ah, but I was so much older then
I’m younger than that now

A self-ordained professor’s tongue
Too serious to fool
Spouted out that liberty
Is just equality in school
“Equality,” I spoke the word
As if a wedding vow
Ah, but I was so much older then
I’m younger than that now

In a soldier’s stance, I aimed my hand
At the mongrel dogs who teach
Fearing not that I’d become my enemy
In the instant that I preach
My pathway led by confusion boats
Mutiny from stern to bow
Ah, but I was so much older then
I’m younger than that now

Yes, my guard stood hard when abstract threats
Too noble to neglect
Deceived me into thinking
I had something to protect
Good and bad, I define these terms
Quite clear, no doubt, somehow
Ah, but I was so much older then
I’m younger than that now

Copyright © 1964 by Warner Bros. Inc.; renewed 1992 by Special Rider Music

Yeah...so as you can tell I've found lyrics, and they have imprinted on me. Nevertheless, I will see how many people I can share them with.

Ciao!

Sunday, October 2, 2011

md5sum for Ubuntu USB Installer

Hey all! Just a quickie to explain the importance of md5sum checking.

When using the Ubuntu Startup Disk Creator, making bootable USB thumb drives is straightforward. The .iso file used to make the thumb drive, however, must pass a data integrity test using the md5 algorithm. Essentially, if you download a Linux distro, you should (and up until now, I hadn't been) check that the md5sum matches the string defined on the Ubuntu website.

The method is quite simple and will save time in the long run. A description and method for use of the md5 checksum can be read here.

To check the integrity of the data via md5, open terminal, go to the location of the linux-distro.iso, and enter the following:
$ md5sum linux-distro.iso

This may take some time depending on your computer, but I'd say no longer than 2mins. Here is an example showing the output when checking the md5sum for the distro Ubuntu 10.10:
$ md5sum ubuntu-10.10-desktop-i386.iso
59d15a16ce90c8ee97fa7c211b7673a8  ubuntu-10.10-desktop-i386.iso

And if you cross reference with the Hash Site, you will see that there is consistency.
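You can also sanity-check md5sum itself with a small known input (this is a standard test value, not from the Ubuntu site):

```shell
# md5 of the 5-byte string "hello" (printf avoids a trailing newline)
printf 'hello' | md5sum
# 5d41402abc4b2a76b9719d911017c592  -
```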

This post was made because I have just tried to boot Ubuntu 11.04 Live on my friend's laptop and was getting SQUASHFS read errors.

A seemingly n00bish mistake, but we gotta learn somehow.

Until next time. Ciao!

Wednesday, September 21, 2011

Crontab

When editing crontabs, you must edit the root cron file - I found that editing the user cron file would not run the script I desired. Today I've read that crontab is a file/process which automates the running of scripts in sync with time. The best example of a cronjob is the monthly backup people perform in case data becomes corrupt or they get a virus. Rather than doing it manually, or downloading some process-heavy software to do it for you, you edit the crontab file and specify the script(s) that you want to run.

Below will be the list of code you will use + an example in which you can see if you have set up a cronjob correctly. Yes, I'm one of those people who enjoy peace of mind - knowing something works is better than thinking something works.

Here's what you need - but don't execute them yet, keep reading:
$ sudo crontab -e # To access the root cron file
$ sudo touch /etc/cron.d/ # To refresh the cron process with your new settings

The first time you access the crontab file, it will ask you what editor you wish to use - I use nano. I was puzzled about the location of the crontab file, since crontab -e opens a temporary copy under /tmp and I originally thought /tmp files were deleted on shutdown (the permanent copy actually lives under /var/spool/cron). Anywho...the default crontab might look like this:

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command

The information you need to know about the Cronfile, and editing, it are in the Cronfile itself. But here's a quick run down.

You add a cronjob by using the following syntax:
* * * * * /path/to/crontest.sh

Notice there are 5 positions. From left to right they specify:
  1. Minute, 0 - 59
  2. Hour, 0 - 23
  3. Day of month, 1 - 31
  4. Month, 1 - 12
  5. Day of week, 0 (Sunday) - 6 (Saturday)
So cronjobs are on an annual cycle. The asterisks example is basically telling the computer that the cronjob is to be run every minute.
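To make the five positions concrete, here are a few hypothetical entries (the script paths are placeholders, not from this post):

```
30 2 * * *   /path/to/nightly.sh    # every day at 02:30
0 5 * * 1    /path/to/weekly.sh     # every Monday at 05:00
0 0 1 * *    /path/to/monthly.sh    # midnight on the 1st of every month
```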

Testing Crontab/Cronjobs
Here's a quick way to test crontab.

1) Open terminal and move to a directory of your choice.

2) Make a new file and call it "crontest.sh".

3) Give crontest.sh user executable permissions (or chmod 755, which gives the owner read/write/execute and everyone else read/execute):
$ sudo chmod u+x crontest.sh

4) In crontest.sh, add this script. It just appends the date, in the format you desire, to a file.
#!/bin/bash

date +"%d-%m-%Y" >> /path/to/the/storage/file.txt # Date ID, >> tells terminal to append to end of file

Save and exit.
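If you want to see what that format string produces before waiting on cron, you can run the date command with a fixed date (using GNU date's -d flag):

```shell
# day-month-year, dashes included, for 20 Dec 2011
date -d '2011-12-20' +"%d-%m-%Y"
# 20-12-2011
```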

5) Open up the crontab file:
$ sudo crontab -e

6) Look at the current time and edit the crontab file with a few minutes added to the current time. So, if the time is 17:40, make the crontab entry look like this:
...
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
45 17 * * 1-5 /path/to/crontest.sh

Close nano and save the changes.

7) Now refresh crontab:
sudo touch /etc/cron.d/

Giving your password where required.

8) Sit and wait. When 17:45 comes around, the script will be run. Check the /path/to/the/storage/file.txt file and see if the date is there. If yes, then your method is good. If no, then check that permissions are correct; check that the code and all pathways are right.

Hope you enjoy this HowTo. I'm probably going to make a cronjob to append CPU temps to a file such that I can graph them and see if certain parameters are better/worse for my CPU performance.

Here are two sites which I found helpful:
http://klenwell.com/press/2010/11/cron-d/
http://kevin.vanzonneveld.net/techblog/article/schedule_tasks_on_linux_using_crontab/

Enjoy, Ciao!

Tuesday, September 20, 2011

The journey of overclocking begins!

Overclocking is something that I've always wanted to do. And today, when I should have been finishing an assignment about National Park Management, I decided to read up and perform some overclocking, hehehe.

So far I've:
  • Joined overclockers.com 
  • Overclocked my desktop PC using the preloaded OC profiles on my ASUS motherboard, and
  • Crashed my desktop once -> a BSOD!!

Here's a neat little YouTube vid that I found interesting. The dude states what sounds like it could be the first tenet of overclocking: the NB frequency must be equal to or greater than the HT frequency in order to have a stable system.

NB Frequency >= HT Frequency (But is this true? I'm wondering now.)

I'll be testing this in the next few days to see if that is the case. Also, he suggests that CPU temps exceeding 65 °C may not be very good, so keep that in mind also.

Vid1
Vid2

While it was slightly disheartening when the BSOD came up, I was able to go back to default settings. What troubled me was that when I rebooted, I had somehow lost 1 core - only 5 cores were being detected/used. So even though the settings were on default, I had lost a core. But I soon discovered that the BIOS appears to have individual core on/off settings and 1 of the cores was switched off when the problem was detected. So I turned it on and everything is fine.

But I guess I learned something, don't be in a hurry.

Computer specs:
ASUS M5A88-M Motherboard
AMD Phenom II X6 1075T
2 x 4GB Kingston RAM @ 1333MHz (need to upgrade this next)
And I suppose I should mention that I'm running most of the tests on Windoze 7....


There is a really good AMD Phenom overclocking guide here. But there are no specific posts where people show successful/stable settings. Maybe I can be the first to post one...



According to the AMD datasheet, there are some max settings which I probably should not exceed. MT/s stands for million transfers per second.

Max DDR speed: 1333 MT/s
Max HT Link speed: 4000 MT/s

Update 9/10/2011:
In regard to doing the system stability tests on Windoze, I am still yet to find a Linux CPU stress tester. CPU and MB temps are now docked on the top panel and this is useful. However, I'm skeptical about the accuracy, as I'm almost certain that the temp is just a function of the voltage being run through particular components at any one time. As a result, the temperature factor used by Linux may be different to the one used by Windoze...

Wednesday, August 24, 2011

Blacklisting a driver

For new computers, there can be a lot of tinkering involved. Ever since I bought a new Asus desktop computer, I've been doing a lot of that - and it's been great! If only the building process took a little bit longer than 25mins :(...

Generally, I've found Ubuntu Linux great for computers. Both of my laptops have worked well - after a clean installation of Ubuntu, all my hardware is usually detected correctly. It's quite pleasant. When clean installing Windoze on the other hand - I have to hunt around the web and download the exact driver for the exact model hardware. But I won't go there.

This will be a quick blog about the Realtek LAN card on my new computer (see specs below). The problem: the driver loaded by Ubuntu, r8169, appears to be faulty once the clean installation is complete and I upgrade all necessary packages. Solution: install the driver from the Realtek website, r8168, restart /etc/init.d/networking and you should be fine.

Desktop specs:
Motherboard: ASUS M5A88-M
Realtek Card (on-board): 8111E/8168B

But, Ubuntu tries to be clever (and I enjoy it when it does :D) and resets the driver to r8169 when you restart the computer. So one solution put to me was to blacklist the r8169 and set the r8168 as the default (NB: my terminology is incorrect, r8168 may not be the correct name for the 'driver'). praseodym from UbuntuForums gave me this suggestion, and the method for implementation is at this UF thread.
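For reference, the usual Ubuntu way to blacklist a module is a one-line entry in a modprobe config file (the filename here is my choice; see the UF thread for the exact steps praseodym suggested):

```
# /etc/modprobe.d/blacklist-r8169.conf
blacklist r8169
```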

Alternatively, I came across another thread which appears to be a duplicate of my problem - although worded differently. The suggestion here is to upgrade the kernel to 2.6.39 and the problem should go away.

Seeing as my system is running and has a continuous connection...I might try that another time.

Why fix it if it ain't broke?

Ciao!

Update - 28 Nov 2011:
There was an update just the other day of a few hundred meg, after which I found that I was having trouble with my connection. What I found was that the 'new' driver had been reinstalled and thus started giving me the up/down problem again. Had to back track a bit because I had forgotten just how I installed it last time.

The Realtek driver download was fine, but because I'm running kernel version 3.0, I suspect that the autorun.sh file was unable to execute properly. Did a quick search and found this website with a run down for those not on 2.4x or 2.6x linux systems.

Connection is working now. :D!

Sunday, August 14, 2011

Windows USB Bootable

Much of the last weekend was spent trying to figure out how to make a USB Flash drive bootable for a Windows installation. This is no simple task, especially if you don't have a spare Windows computer up and running. But if you do, then you are in luck.

Before I attempted to use the Windows app, I did try several Linux options. UnetBootin and the CLI dd command were both promising (see links below). However, the system I was trying to clean install did not have the appropriate boot options, and so the drive was not detected on startup. This may have been a problem unique to me, so I would advise giving these options a go...then, as a last resort, use the Windows app (Note: you won't turn into a blood-sucking parasite for using this method, but you will feel strange afterwards).

So, yes, if you want to make a Windows 7 bootable USB flash drive, go to the Microsoft website, download the tool and run it on a Windows machine. Link here. Until I can find another way to do it, this will have to do.

Ciao!

Links:
http://www.webupd8.org/2010/10/create-bootable-windows-7-usb-drive.html
Good, but didn't work for me - http://serverfault.com/questions/6714/how-to-make-windows-7-usb-flash-install-media-from-linux

Update: 20 Dec 2011

Good news everyone! lol...link to the Professor Farnsworth, because it's worth it.

But on a serious note, I have just found a simpler way to make a Windows USB boot drive. And by simpler, I mean on the command line :P.

The video here tells it all, but I'll summarize here, for the sake of documentation. Big thanks to uniquefree, as I for one would never have figured this out.

Briefly, the method:
  1. wipes and formats the usb drive using FAT32 or NTFS file system
  2. copies the files from the Windows CD/DVD straight over to the drive
I'll have to give this a go in Linux. However, at present, I'm not having much luck reading/writing to NTFS or FAT32 usb file systems :(.

Format the USB

For this example, we want to copy files from a CD/DVD drive or even a mounted .iso image at the location D:\, to our USB which will be F:\. This will obviously be different for you.

Start by partitioning the USB, using DISKPART and the following commands:
C:\>DISKPART
DISKPART> list disk
DISKPART> select disk 3
DISKPART> clean
DISKPART> create partition primary
DISKPART> select partition 1
DISKPART> active
DISKPART> format fs=fat32
DISKPART> assign
DISKPART> exit
You still need a Windows computer for this one. Open the command prompt, cmd.exe, and type in DISKPART. This program should come preinstalled. The first few commands are important. Use 'list disk' to print out all the drives that have been detected by Windows. Like uniquefree's video, there will be a corresponding number down the left-hand side of the output. Find the number that corresponds to the USB drive that you want to move the files to, and use it with the next command. This is important because if you choose the wrong number, DISKPART will happily wipe and format the wrong drive.

uniquefree makes this quite clear, so I thought I'd do the same. I'm wondering if the 'Quick format' option that some people are familiar with would still make a bootable USB, but as yet, the longer format option given here is the way that works.


Now the USB is ready, copy the files over using xcopy (/s and /e copy all subdirectories, including empty ones, and /f prints the full source and destination paths as it goes):
C:\>xcopy D:\*.* /s/e/f F:\
And you should have it!

This method is much more appealing to me and I hope it makes installs much more pleasant for those who use Windows...hehehe.

Ciao!

Friday, July 1, 2011

Server Fstab

Just making a quick post to get this month started.

My server box has been running really well. The simplicity of the components appears to be holding up - I still haven't found a proper benchmarking tool to test it out, but oh well, another day. For now, I've attached several external hard drives so that I can play their contents on either of my laptops. Using the smb client has proven to be very successful, even over the wireless network. My housemate states that videos will often skip when she plays them on her computer; however, this is not the case on my computers, so we are both assuming that her computer does not have enough processing power to play the videos. An easy solution would be to use a wired connection - and we might just do that in due time.

What I would like to post today is my current /etc/fstab file. I've finally decided to read a bit about fstab configuration and now I'm wondering why I didn't do it earlier - it's really simple! As always, there is good documentation from the Ubuntu website. There is an example of an fstab entry and as you will see, they are not much different from a mount command.

To find out information about the devices that are connected, use blkid - the output will look like the following:

sudo blkid

/dev/sdc1: LABEL="A-hard-drive" UUID="12B0WF25B0DF0KDB" TYPE="ntfs"
/dev/sdd1: LABEL="Another-hard-drive" UUID="554123D9E73ABF54" TYPE="ntfs"

Take note of the UUID and the TYPE. These will be used in your new fstab file.

Go to /etc/fstab and make a copy of the original file, just in case something goes horribly wrong. I usually just append .orig to the end of the file (fstab.orig); that way it can be found easily again later. If you prepend it instead (orig.fstab), then you might have trouble finding it later.
sudo cp /etc/fstab /etc/fstab.orig

Open up /etc/fstab with your favourite editor - I use nano since I don't know emacs...yet :). Your original file will look a bit like this, though it may have more entries depending on how you set up your distro.
sudo nano /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
/dev/sda1       /               ext4    errors=remount-ro 0       1
# /home was on /dev/sda5 during installation
UUID=5efe42d5-1b11-4287-a604-diid345b63d6 /home           ext4    defaults        0       2
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0

So alls you gots to do, is add your entries to the end of this file. Easy huh? From blkid before, you'll need to take the UUID and use that so the computer can identify the external HDD if it happens to be plugged in. I'm hoping that you have read the Ubuntu Docs I linked to before, and you will know that the syntax is something along the lines of:

[Device] [Mount Point] [File System Type] [Options] [Dump] [Pass]

That said, for me, I just add these two lines to the end of fstab and my configuration is complete:

UUID=12B0WF25B0DF0KDB /media/OneTouch ntfs-3g uid=user,gid=group-name,umask=0000 0 0
UUID=554123D9E73ABF54 /media/Elements ntfs-3g uid=user,gid=group-name,umask=0000 0 0

Final file:

# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
/dev/sda1       /               ext4    errors=remount-ro 0       1
# /home was on /dev/sda5 during installation
UUID=5efe42d5-1b11-4287-a604-diid345b63d6 /home           ext4    defaults        0       2
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
UUID=12B0WF25B0DF0KDB /media/OneTouch ntfs-3g uid=user,gid=group-name,umask=0000 0 0
UUID=554123D9E73ABF54 /media/Elements ntfs-3g uid=user,gid=group-name,umask=0000 0 0

Important! The security of your filesystems is solely up to you. I have used ntfs-3g because this allows users to write to the filesystem; more info about which filesystem type to mount an external as can be read here. Also, by setting uid and gid, you can permit only certain people to access the filesystems. I believe I have mentioned this before, but there's no harm in mentioning it again, right? Call me paranoid, call me what you will...

Hope this guide is useful, I'm sure I'll be reading it again in the future, when my server dies or I forget what an fstab is.

Adios, till next time!

Thursday, June 23, 2011

Dialog from Kristinn Hrafnsson

Making my way to the Queensland Conservatorium, Griffith University, on a very fine winter day gave me, and many others, the opportunity to partake in a public forum addressed by WikiLeaks spokesman Kristinn Hrafnsson.

Dialog from the address, quoted and revised:

Snippet from leaked US-Army video -> Collateral Murder

Media partners: NY Times, Le Monde, El Pais, Aljazeera, Der Spiegel, SVT, 4, The Guardian

"...only about 50,000 cables are online."

"Often media companies are untrustworthy and individual journalists are consulted."

"Care is made not to make unnecessary harm to people."

"The cables have demystified other information."

"...that the cables will have a devastating effect on information connections between countries."

Zine El Abidine Ben Ali is the former president of Tunisia, who was regarded as a valuable ally by certain parties in the USA.

GITMO FILES (Guantanamo Bay files) - an example of bad information being used to detain a number of people in conditions that do not meet the Universal Declaration of Human Rights.

WikiLeaks has revolutionized journalism by partnering with media corps to better understand the importance of freedom of information.

"In a race between truth and secrecy, truth will always win." Kristinn says that he was impressed by these words, up until he discovered the creator of the saying, Rupert Murdoch.

WikiLeaks exists only due to the need for its service - once governments become transparent in relaying information to the public, WikiLeaks may not need to exist.

Saturday, June 11, 2011

Permissions for mounting Externals


When mounting externals, often the default permissions will be those of the user mounting the device. This can create a barrier when you want to access the external over a network. So when mounting, it is good practice to set permissions that meet your specific needs. Ubuntu Docs has a good page on permissions; however, you probably just want a working example. So here you go.

For mount, the umask determines the permissions, and if you read the Ubuntu Docs page briefly, read, write and execute permissions are given by 4, 2 and 1 respectively. As you will see, a umask of 0000 results in permissions = drwxrwxrwx. If you give umask = 0001, then the permissions = drwxrwxrw-. A simple concept; however, at first I thought that it was the other way around.
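The arithmetic can be checked in the shell. This is just an illustrative helper (the function name is mine, not a standard tool): the effective mode is the base mode 0777 with the umask bits stripped.

```shell
# Hypothetical helper: show the directory mode a given umask produces,
# i.e. 0777 with the umask bits removed.
umask_to_mode() {
  printf '%04o\n' "$(( 0777 & ~0$1 ))"
}

umask_to_mode 0000   # 0777 -> drwxrwxrwx
umask_to_mode 0001   # 0776 -> drwxrwxrw-
umask_to_mode 0027   # 0750 -> drwxr-x---
```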

So here's how I would mount an External on my server for read/write access by guests/users in the specified group:
mount -t ntfs-3g -o uid=admin-user,gid=guest-user-group,umask=0000 /dev/sdc1 /media/OneTouch

And if I wanted to lock it down - full access for the mounting user, read access for the group, and nothing for anyone else (umask=0027 gives drwxr-x---):
mount -t ntfs-3g -o uid=admin-user,gid=guest-user-group,umask=0027 /dev/sdc1 /media/OneTouch

Extra control can be exercised using uid and gid to determine who is the owner and which users of the group can access the external. This may take a little extra tweaking but can be a good security measure. To get started, use the following command to determine which groups a user belongs to:
id username

For more information about mounting, there's a resource here.

Adios, till next time!

Tuesday, June 7, 2011

SwiFTP and LFTP to do some syncing

Well, I should be studying, but instead I'm delving into some FTP stuff and how to sync particular folders. All this is due to me wanting to make my music listening a hell of a lot easier. At present, Winamp does not have a Linux version, and this sucks because one feature of Winamp that has just come out is fricken awesome - Wireless Sync, pretty self-explanatory.

So I need a work around.

On your Android device
At present, I'm running Gingerbread 2.3.3 on my SGS2. Download and install SwiFTP from the Android market - information here. I've found this app simple and easy to setup - thus far. Once it is fired up, put in the details that you wish to have for your FTP server (your device) - user, pass, port (anything in the 1000's usually works, but not always). Most importantly, where it says:
Stay within folder (e.g. /sdcard):
...just put a...
/
...in the box. Later on when you connect from your client, you can point LFTP to whichever folder you wish - hopefully. Some info about why you should do this here (search for 'root').

Now, I say this because there may be a SU (superuser) or root error (is your device rooted?), in which case you will not be given access to the folder on your phone, even if you typed the password in correctly. There needs to be a workaround for this, because there are rumors that rooting your device can stop it from using certain provider apps, e.g. any Optus apps on my device. And frankly, once rooted, I'm not sure if factory defaults can be restored.

Once this is done, save the settings and the FTP server will start automatically.

On your Linux box
With your FTP server running, open up terminal on your Linux box and type in the following:
lftp -u name -p 1234 ftp://192.168.x.xx:/
-u specifies the username
-p specifies the port which you setup earlier on the device

You should be prompted to enter a password which was the one you setup earlier. And if all goes well, you will have a line that looks like this:
lftp name@192.168.x.xx:/
And this verifies that you can make the connection. Now, if you type in 'ls', you should be given all the files and folders of that folder. Neat!

Syncing with rsync -> FAIL
Rsync is a program which appears to come pre-installed on most Linux distributions. It allows for syncing folders between your machine and a remote one. It seems much like creating a symlink, but is only run when the user says so. There is a neat little thread here which says it all.

So...I've just finished trying to get rsync to work with my FTP server, but everywhere I go says that it cannot be done :(. Well at least I know about Rsync I guess. There is a solution however...read on.

LFTP and Mirror
So the solution, as posted here by easel (round of applause!), is to use 'mirror' to do the syncing over FTP using LFTP. So you'll want to make a script file with the code that easel has posted. I've changed mine a little for my needs:
lftp -c "set ftp:list-options -a;
open ftp://user:password@your.ftp.com:port; 
lcd ./web;
cd /web/public_html;
mirror --reverse --delete --use-cache --verbose --allow-chown --no-umask --parallel=2"
Some of the options used include:
lcd - is the local directory you want to sync,
cd - is the directory on the FTP server in which you wish to sync
and...
mirror - sync the two folders, choosing the local directory as the master (everything on cd that isn't in lcd will be deleted). Type 'man lftp' for more information on the options used here, --reverse, --delete etc.

Don't forget to make the script user-executable: chmod u+x.
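To make that concrete, here's a toy version of the save-then-chmod step. The file name and contents are placeholders, not the real sync script:

```shell
# Write a placeholder script, make it user-executable, then run it.
cat > /tmp/sync-demo.sh <<'EOF'
#!/bin/sh
echo "lftp mirror would run here"
EOF
chmod u+x /tmp/sync-demo.sh
/tmp/sync-demo.sh    # prints: lftp mirror would run here
rm /tmp/sync-demo.sh
```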

Note: lftp uses the FTP protocol to transfer data and is therefore not secure (SSH is an example of a secure transfer protocol; I need to investigate how I can use it instead). The password that was entered earlier could easily be discovered by anyone listening to the 'conversation' between your phone and your computer...so be careful!

Pretty neat huh! That's it. It may need to be run from the CLI, or you can create a launcher on the desktop. That should work.

Ciao and happy syncing!

Samba Server

My quick HowTo on how I setup Samba on my floor server. Methodology taken from here. Might also be useful, maybe even quicker, to look at Ubuntu Docs here.

1) Check samba and smbfs are installed
sudo apt-get install samba smbfs
It should be...

2) Edit the smb.conf file
Edit the file to your needs. Don't forget to make a copy of the original file somewhere just in case something goes terribly wrong.
sudo nano /etc/samba/smb.conf

2a) Tell Samba to allow only known accounts.
Find the following line...
#  security = user
Uncomment it and also add an extra line:
security = user
username map = /etc/samba/smbusers
Of course, you can tell Samba to point where ever you want it to, just don't forget where.

2b) Create Samba user account
Create an account that you will give access to Samba shares
sudo smbpasswd -a name
Add the user to smbuser file and link it to an account on the server
sudo nano /etc/samba/smbusers
Once the file is open, add the following line. The "name" will be linked to the name account:
name = "name"
Save and exit nano.

You have now setup a user for Samba. Now to finish it off - fatality style O_o.

3) Fine tune smb.conf

Now, if you read carefully, the smb.conf file has a whole load of options. Some are straightforward, some are complex - in the sense that I don't know what would happen if I changed them. Everyone will have different preferences, but these should get you started:

Um...there's too many to list. See Ubuntu Docs, pretty useful.

But, to be helpful, if you want to get started quickly, just add this to the end of the smb.conf file, with your options of course.
[share]
    comment = Ubuntu File Server Share
    path = /srv/samba/share
    browsable = yes
    guest ok = no
    read only = no
    create mask = 0755

4) Restart services

I've also noticed that when doing this, there is some sort of lag in the service restart. So, you may need to wait a little longer before the changed settings will take effect.
sudo restart smbd
sudo restart nmbd

Conclusion
So I think that's it really. I've found that Samba just gives a UI for accessing server files. It seems much the same as using SSH and scp to move files to and fro. However, scp can be much faster if you know where you want to put something on the server and you have already set the right permissions. Which brings me to my next point: permissions. They are very important here - permissions, users and groups. I almost gave myself a headache, but it's manageable so long as you remember which users are in which groups, and which groups can do what. As a general rule, I've written it on stone tablets in my mind that the user you create when you do a clean install has admin/root permissions. Therefore, don't use that user name for this kind of stuff. So...if you are having trouble writing to shares once you have mounted them, then my advice is to look up some docs on permissions. Running this:
sudo chmod a+w directory
...may be all that is needed to allow you to do so.
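As a quick sanity check of what a+w does, here's a run on a throwaway directory, with the umask pinned so the result is predictable:

```shell
umask 022                         # so mkdir creates the directory as 0755
mkdir -p /tmp/share-demo
chmod a+w /tmp/share-demo         # add the write bit for user, group and other
stat -c '%a %A' /tmp/share-demo   # prints: 777 drwxrwxrwx
rmdir /tmp/share-demo
```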

Have just re-read this post and...it's longer than Ubuntu Docs :(. Oh well, enjoy!

Extra Places
http://us1.samba.org/samba/docs/man/manpages-3/smb.conf.5.html

How should a server be setup?

After recently acquiring a $75 tower with reasonable specs, it has come to my attention that setting up a home server involves several steps that are a must. The last box died, in most part due to the age of the 15GB HDD. But Ubuntu was able to run, and for a time there was a nice little server running on it - 1 GHz, 256MB RAM (the 512MB I put in wasn't detected...). After wiping the Windoze system and doing a clean install of Ubuntu 11.04 via network, I was able to get a LAMP installation running (but not utilising MySQL or PHP, as these are both foreign to me), a Samba file-sharing thingo, and I also installed and shared my Canon printer :).

Now, the guy I bought the $75 box from (3 GHz, 1GB RAM, 80GB HDD) suggested using a USB stick to store important files. And while I do agree to some extent, I really just wanted to learn how to get a server going...hehehe. There may well be some extra benefits in the long run from having a file system that is accessible over the internet; however, one does need to be wary of security issues and also the possibility that things might not do what you want them to - it's a learning process. With that in mind, I think Ubuntu Linux makes a very good distribution for getting a server working.

Below, is a short list which is MY recommendations (and what I have done just recently) for what you should do/install after you have put an awesome distro of Linux on your box that has been sitting in the corner doing nothing for the last 4 years. My final message is as above: security is paramount. I'm not talking about physical security, although that may be important depending on which neighborhood you live in, but rather internet security. Call me paranoid, call me what you will, but there's no point setting up a server if other people are going to be able to get at it...unless you want them to of course.

Here's my list:
1) Configure sshd_config on the server so you can access it safely (in progress)
2) Setup printer
3) Move your web files to /var/www
4) Setup Samba

Here's my list - the detailed version
1) Configure sshd_config on the server so you can access it safely (in progress)
* Change the port on which SSH listens. 22 is the default port, so if you change it, unwanted guests will take longer to find your server.
* Disable root access. Allow only specific user/s, but never an account with root access.
* Setup SSH access via a public/private key. See Ubuntu Docs for more details and how to make one. Note: after using this method and transferring the key to the server, you may find that the key fails to authenticate. This can happen because the key has not been loaded into the ssh-agent on the client. Run this command on the client computer (the computer from which you are accessing the server); for some reason it is not mentioned in Ubuntu Docs:
ssh-add
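Taken together, the first two bullet points end up as something like this in /etc/ssh/sshd_config. The port number and username are placeholders - pick your own, and keep a working session open while you test:

```
# Listen somewhere other than the default port 22
Port 2222
# Never allow direct root logins
PermitRootLogin no
# Only this account may connect (placeholder name)
AllowUsers alice
```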

2) Setup printer
* Make sure CUPS is up to date and all package dependencies have been met
* For my Canon PIXMA, see older posts
* If something isn't working, try turning the printer off and then on again, seems to do the trick

3) Move your web files to /var/www
* Maybe it's bad practise, but I've been getting into the habit of using /var/www as my LAMP root directory - it's the default directory anyway and I think it's safe
* If you wish to make directories that are accessible via internet, then you have to give the folder guest access. Just use chmod and chown to do this.
* I guess this is essentially how I set up my webserver. Not very flash bang and yes, update my website is one thing I need to do...

4) Setup Samba
* Only useful if you want to access your files over different OSes
* Or, if you don't like using SSH - which I kinda like...
* See next post for details

Things worth looking into:
* Logging break-in attempts: not sure how to do this but need an easier way than reading the /var/log/auth.log manually...

Tuesday, May 24, 2011

Print Server what tha?

Just about to go to bed. But probably worth remembering how I set up my print-server.

Essentially, just followed the HowTo here.

There may be a moment when you appear to have done everything right and have used CUPS to install your printer, but it still won't work. It is possible that some unmet dependencies came about when you downloaded the drivers for your printer. All you may need to do is fix these dependencies:

sudo apt-get -f install

As per the HowTo I mentioned above, it is useful to install this driver:

sudo apt-get install cupsys-driver-gutenprint

Don't forget to restart CUPS when you're done, otherwise you won't get very far.

Troubleshooting
1) Giving the printer the wrong paper size may result in your printer doing nothing, not even telling you that it doesn't like the paper size.

Ciao

Sunday, May 8, 2011

Rkward and R CLI - Useful commands

Rkward (the GUI for the statistical package R) is a great FOSS program offered to those who choose to fly Linux :). I've started using it again because it's that time of year again, when field report data needs testing. And yet again, it's time to read up on the R-project documentation, not to mention all the Rkward documentation...

There are many additional packages offered with Rkward, so many that the scroll bar moves about 20 lines with a small movement of the mouse. Crazy, I know. The R-project website is a very useful place; I picked up the complete manual and it is sitting on my desktop. It is about 3500 pages long, but very usable with a little know-how and the 'Find' function.

What may be unknown about R, is that much of it (the best parts) uses code. And what is good about that? Well I'm not sure yet, though I think I have an idea. I'll try and keep this post 'Open' and edit it with useful code for the R CLI. Enjoy!

Install Packages
For some reason, I always have trouble using the GUI to install packages. Workaround: use the CLI. Running Rkward as sudo may also help, but it's not required.

> install.packages()
A window will come up and your package/s should be listed. It will download, and install the package in one go. Don't forget to load the package if you need to use it. In the R CLI, type:

> library(package.name)

Useful packages
So here is a list of the packages which I have installed on my system. These do not come installed with an R package:

-> car
-> hexbin
-> sm
-> reshape
-> sp
-> vegan
-> googleVis
-> rgl
-> zoo
-> chron
-> ggplot2
-> ecodist

NB: googleVis is an awesome package!!

To install all of the packages listed above in one go, execute:
> install.packages(c("car","hexbin","sm","reshape","sp","vegan","googleVis","rgl","zoo","chron","ggplot2","ecodist"))

Assigning data from data.table
Rkward comes with a neat way to handle data. Once Rkward is running, you can create a new data table and enter data into cells. This may not always be the best method, but here is how you store columns from that data table in a variable, e.g. x, so you can then perform various tasks comparing two or more of the columns of your dataset. On the CLI:

> x <- my.data[1]
> y <- my.data[2]

Unlike a typical computer language, R begins counting from 1 and so my.data[1] will be the first column, my.data[2] will be the second and so on. You can now call the column of data simply by typing the variable into CLI.

However, this method doesn't seem to work when you pass the columns to a function - my.data[1] actually returns a one-column data frame rather than a vector; use my.data[[1]] or my.data[,1] to get the vector itself.

Scatterplot with regression line
These can be graphed from x,y dataset either by the GUI or the CLI. CLI is like this:

> plot(y ~ x, data=my.data)

Then to plot a fit line using the lm() method:

> abline(lm(y~x, data=my.data), col="red")

It will show up on the graph automatically.

Coefficients of scatterplot regression line (from above)
Print out the coefficients of the regression line:

> variable <- lm(y~x, data=my.data)

As per above, variable can be anything you want; something more descriptive is useful, e.g. fit, coefs, interp etc.


Linux distribution
It came to my attention that R may very well run better on certain Linux distributions. I'm in the process of doing a clean install and am curious about this idea. I have opted for Ubuntu Server amd64, after being convinced that this distribution, while easily supplemented with R's dependencies, is not bundled with process-hogging applications. I'll see how it pans out.

Emacs
So not only have I installed a light-weight yet powerful distro, I've found some useful information about using R with Emacs. Here's some helpful links to get you started:

1) Emacs commands/shortcuts - just a few of the >1000 available
2) Post by a user on how to load an .R file


Links and places to check out
http://www.statmethods.net/index.html
http://lmdvr.r-forge.r-project.org/figures/figures.htm

http://r.789695.n4.nabble.com/Best-64-bit-Linux-distro-for-R-td881882.html - a rather recent discussion is being held here about the best Linux distro to run R on

http://cran.r-project.org/bin/linux/ubuntu/ Good guide to CLI installation of R

Monday, April 25, 2011

Instant Swap_space


Just a quick one here. Need swap? Not sure why you would if you have an awesome computer, but here's the method anyway. I've just tried this, and my computer crashed. I'm guessing that I don't really need swap, although I was intrigued by the idea that "you need swap if you want to be able to Suspend/Hibernate your computer". Anyways...

This method can be found here.

1) Create an empty file - didn't know you could do that!:
sudo dd if=/dev/zero of=/swap_file bs=1M count=1000
count=1000 can be replaced with whatever file size you wish. It has also been noted that there is an optimum RAM-to-swap ratio, so don't make it too big, as that might just do the opposite of what you want. of= is essentially the location and name of the file; of=/home/person/swap_file is another example.
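Here's the same dd invocation at toy scale (1 MB instead of 1000, written to /tmp so no root is needed), just to show what of=, bs= and count= produce:

```shell
# Create a 1 MiB file of zeros: count=1 block of bs=1M each.
dd if=/dev/zero of=/tmp/swap-demo bs=1M count=1 2>/dev/null
stat -c %s /tmp/swap-demo   # prints: 1048576  (1M * 1 bytes)
rm /tmp/swap-demo
```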

2) Change permissions so the file is safe from others:
sudo chown root:root /swap_file
sudo chmod 600 /swap_file

3) Tell the computer that you want this file to be swap space:
sudo mkswap /swap_file

4) Turn it on - that's what she said!:
sudo swapon /swap_file

Next are the steps to make it run on bootup. I wouldn't advise this unless you have tested the swap first and know that it is working nicely with your system.

5) Open up /etc/fstab:
sudo gedit /etc/fstab

6) And whack this on the end:
/swap_file       none            swap    sw              0       0

So I thought that this was a nifty way to add swap space, much easier than partitioning a HDD - which wouldn't be so hard if I had some free space on my HDD... I'll test it out and see how my system runs. See ya!

Tuesday, April 19, 2011

Installing LibreOffice on Ubuntu 10.10


Making the change from OpenOffice and can't find another word processing (WP) program? Why not try LibreOffice!

It comes as no surprise that LibreOffice was my immediate choice when the last WP program failed to do me any good. Maybe it's my fault. Maybe I was just too ignorant in my installation and forgot something. I guess I can always go back...

What I'd like to quickly outline is how to install LibreOffice using a package downloaded from their site. This is probably the preferred way to do it, as the program is not found in the repository and/or suggested repositories (not sure why). So again, this is basically just some reference material for me and anyone who needs a quick HowTo on today's topic.

1) Download package
So, and you've probably already done this, go to http://www.libreoffice.org/download/ and download the most recent package for your computer. I'm on Ubuntu 10.10 so I downloaded the "Linux x86 (deb)".

Before I start, I'd just like to make a reference to where I got my information from http://ubuntuforums.org/showthread.php?t=1585017. I'm actually reading it as I write this...

2) Remove OpenOffice
Right. So while the ~143MB file downloads, remove OpenOffice. It can be done several ways - hopefully you are familiar with them. I actually ran into a wall trying to use apt-get, so instead I just opened up the Synaptic Package Manager and removed it that way.

3) Extract file
Once the download is complete, open up the directory where it is located (Nautilus or bash) and extract it. If using bash, enter this to extract the files - it will be different if the file you downloaded wasn't .tar.gz:

sudo tar -zxvf filename

3a) Rename the extracted folder "libreoffice"
This is not required but simply saves typing later on. Bring up bash and type:

sudo mv filename libreoffice

4) Install.
In bash, just use dpkg to install the packages. And so you don't have to continually re-enter the command for every file, use the wildcard symbol:

sudo dpkg -i ~/Desktop/libreoffice/DEBS/*.deb 

Remember to check your pathway; yours will most likely be different. This step may take a while - I didn't time it, I just went to sleep.

5) Install the menu icons.
Lastly, if you open up the "libreoffice" folder and then click on "DEBS", there should be another folder named "desktop-integration". There will be one .deb file in there, and you simply want to install that. Again, make sure pathways are correct - I'm susceptible to pathway mutations.

sudo dpkg -i ~/Desktop/libreoffice/DEBS/desktop-integration/libreoffice3.3-debian-menus_3.3-9526_all.deb 

Now go check your "Applications" menu and if everything was done right, it should be under "Office"...YAY!

So go play, frolic in the fields of LibreOffice and don't be afraid to venture further, or even Tweet comments, I'm sure they'd like to hear em.

Before I go, I want to emphasise again where I got my information from: thank you scouser73 on Ubuntuforums.org, thread here. I'd also just like to share some early experience with LibreOffice.

I haven't actually typed anything up yet, but the Zotero extension comes pre-installed (unless it was fetched from my Download folder while I was installing it...). I've tried adding references and there have been no hiccups as yet. A problem I was having with OpenOffice Writer was some weird thing where, if my computer went into stand-by, when I came back and logged in and tried to type something into my document (which I usually leave open), it would crash. Very annoying. I will be testing that today and hopefully it doesn't crash in the lecture hall! Enjoy, and post me your experience.

Saturday, April 2, 2011

Wgetting


Lately, I haven't exactly been spending enough time on my AMD laptop, traversing the Brisbane landscape with my trusty Samsung Netbook - what a joy! Not knowing where a power-point is, isn't a major issue now. But unfortunately, I end up spending much of my time on my netbook even when I get home. And the downside is that I just don't have the same processing power to do some real computer stuff - sshing, reading Ubuntu Documents (on a 15" screen is the only way), and listening to the radio!

In this quick blog, I'm just going to demonstrate how to wget some stuff such that you don't download the same thing twice. Wget is a useful tool which can be used to download files from the web without the use of a browser. If you want to get an idea of its power, just watch 'The Social Network'. Basically, it can be used to download whole websites and the many different files that a website contains. For example, you can download the files that make up the Google website - the logo, the HTML, any animations, any JavaScript files etc (though I probably wouldn't try to download all of Google...).

Wget is a simple program to use and should come with your Ubuntu distribution, if not, just get it from the Ubuntu Software Centre.

Open terminal (I'm assuming you're using a linux distro) and go to the folder where you want to save the downloaded files. Then invoke Wget. Invoking is simple - program url:

wget http://www.aurl.com

This is without the use of any options. To see the options available, just type:

wget --help

Now, I won't be going through them all, but I'm just going to detail those that are useful to me.

If my friend has a folder on his server that contains a number of files which I will need for a university assignment, and I want to download that folder to my computer, then I will need to do a recursive download. But I don't want to download anything else on his server, just everything below the folder of choice. So I will need to do a recursive download with 'no parent' - Wget jargon. Do it like this:

wget -r -np http://www.myfriendsurl.com

All files will be downloaded to the directory that the terminal is currently in and Wget will neatly put them in a folder called 'www.myfriendsurl.com'.

Having fun yet? But you're annoyed that the pathway it is being saved to is too long, aren't you? Well, there is a solution! --cut-dirs. That's right!

So, I'm guessing your friend is a space-freak and keeps their server nice and tidy by using some sort of naming hierarchy which doesn't use a date or data-genre structure! And now you're downloading from a site whose path is longer than 60 characters and looks like http://www.thesaferhaven.com/home/themonolounge/hinduphilosophy/yoyo-sponge-cake. What a nightmare! Well, what you do is use --cut-dirs to get rid of the extra directory levels that you will never need. Using:

wget -r -np --cut-dirs=4 http://www.thesaferhaven.com/home/themonolounge/hinduphilosophy/yoyo-sponge-cake

--cut-dirs=4 will cut 4 of the folders off the saved path, saving you clicking time and your sanity.

Now, when you go to the directory where you downloaded all these files, instead of having to click www.thesaferhaven.com > home > themonolounge > hinduphilosophy > yoyo-sponge-cake, you can just click www.thesaferhaven.com, and you will be at the files which you are interested in.
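If you want to see what's going on without hitting the network, the effect on the saved path can be mimicked with cut. This is just an illustration - wget does the stripping itself, and recipe.html is a made-up file name:

```shell
# The 4 leading directory components are dropped from the saved path,
# mirroring what --cut-dirs=4 does.
path="home/themonolounge/hinduphilosophy/yoyo-sponge-cake/recipe.html"
echo "$path" | cut -d/ -f5-   # prints: recipe.html
```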

The last option I'd like to tell you about is the 'no clobber' option. All it does is tell Wget to skip downloading any file that already exists on your computer. Use it like this:

wget -r -np -nc http://www.myfriendsurl.com

Of course, you must be in the same directory otherwise Wget won't see the files!

Too easy! In all honesty, Wget is more useful to those who build websites and want to download whole directories in their entirety. Wget is commonly used to download websites and mirror them from a home server. Something I need to learn more about...
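For the curious, that mirroring could be sketched roughly like this - all of these flags are documented in wget --help, and the URL is the hypothetical one from earlier. I build the command as a string first so it's easy to inspect and tweak before running:

```shell
# --mirror turns on recursion and timestamping; --convert-links rewrites
# links so the copy is browsable offline; --page-requisites grabs the
# images/CSS each page needs; --no-parent keeps it below the start folder.
MIRROR_CMD="wget --mirror --convert-links --page-requisites --no-parent http://www.myfriendsurl.com"
echo "$MIRROR_CMD"
```

Paste the echoed command back into the terminal to actually pull down a browsable offline copy.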

Hope you enjoy this instalment of my Linux knowledge and write back to me on what I should learn next!

Tuesday, March 8, 2011

Remote rhythmbox playing...teeheehee


This is awesome. I already have a very good use for this method as you will soon see...waking up my house mates by ssh-ing to my home computer and turning on Rhythmbox, and turning it on loud!

So I really just wanted to be able to play music from the terminal - mainly so I can do this on a remote computer via ssh. I thought it was a crazy idea and not possible, but how wrong I was. I soon realised also that Rhythmbox probably isn't the most process-efficient program to run, but hey, it was straightforward and did what I wanted it to do, so I'll stick with it for now.

This is a guide to running a program on a remote machine over ssh.

Firstly, you'll need to ssh to the desired computer. There are many docs on how to do this. I used a variant of the following command, with no -X option (we want the GUI on the remote machine's own screen, not forwarded back to us).

ssh me@192.168.1.1

Enter the password for 'me' on the host. Then type the command below (sorry, I lost the link to where I found it):

export DISPLAY=:0.0

When you press enter, nothing will happen. However, you have just told the session where to direct programs that require a GUI - :0.0 is the remote machine's own screen. The same variable is used if you want to direct the program to display on the local computer instead, but a different display number is used. That is a little more complicated though, and some files need to be edited.

Ok, so to test that you can get a program to start-up, try xeyes.

xeyes

It's much easier to know if it works if the computer you are ssh-ing to is right beside you. So now that that works, we want to get Rhythmbox going. The command is:

rhythmbox-client

This opens Rhythmbox (if it isn't running already) and then lets you send commands to it via the terminal. To find a list of options, run:

rhythmbox-client --help

Lastly, cd to the folder where all your songs are located, and queue them like so:

rhythmbox-client --enqueue FOLDER-NAME

Voilà! Use the --play and --next options to get to the songs you want to play. Of course, they have to be in the queue to play.

I will probably find an easier method for playing songs soon, but hey, this works for now. The only problem I've run into thus far is that if you add a folder, it adds all the files in that folder - documents, images etc. Possible solution: run ls with a grep filter to select only song files. It should work; I'll try it soon.
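That ls-plus-grep idea can be sketched like this - the extension list and the sample file names are just my assumptions, standing in for a real `ls ~/Music`:

```shell
# Keep only common audio extensions from a folder listing; each match
# could then be passed to rhythmbox-client --enqueue.
printf '%s\n' song.mp3 cover.jpg notes.txt track.ogg album.flac \
  | grep -iE '\.(mp3|ogg|flac)$'
# prints: song.mp3, track.ogg, album.flac (each on its own line)
```

In practice, `find ~/Music -iname '*.mp3' -exec rhythmbox-client --enqueue {} +` does the same job and copes with spaces in file names.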

Thanks and have fun waking up your neighbours!

Saturday, February 26, 2011

Setting up a server, the nanoTux way!

Good evening. I think I've built up enough oomph to attempt AGAIN to set up a server. Using the nanoTux guide, http://nanotux.com/blog/the-ultimate-server/, I'm really hoping that it works and that I haven't gone insane by the time I reach the end of the HowTo. This guide was found on the Ubuntu Forums website in the Server Platforms area, and seeing as it's had almost 27000 views, I'm sure it's the real McCoy.

For a complete HowTo, go to the address given above as this post will simply be making references to the code I use, and any workarounds that I implement. Begin! Shit...no coffee, plunger is missing...

To be continued...

Sunday, February 6, 2011

Just some useful stuff: bash networking commands


After all that clicking and scrolling for my last post, I ended up finding several other useful commands. Most of them are to do with networking and I'd just like to have them handy. Here goes.

Display total space and availability:
df -h

Display size of a folder, a couple of options here, may need to be sudo:
sudo du -hs /path/to/folder
sudo du -chks /path/to/folder

IP Scanning with range 192.168.1.1-192.168.1.254:
sudo nmap -sP 192.168.1.1-254

Scan operating system on target IP (cool!):
sudo nmap -O 192.168.1.3


...and that was all!

Ubuntu and Windows: admiring how they share


Up late again but I've struck some success. I've just found a neat little way to access a share folder located on a windows box on your current network. This will no doubt be a short How-To since it's 12:44AM and I didn't fall asleep till 3:30 this morning because it was so hot, great.

So, basically, you have a network, and because you are leet, you run an Ubuntu machine, but of course someone else will be running Windows (and others maybe even a Mac, God help me if I have to figure that out). If you have used Windows, then you will know that there is a somewhat simple way to set up folder sharing, but this requires you to be connected to the network via a cable. Not sure why.

To share a folder in Windows with the rest of the network, you just right-click on the folder you wish to share, then Properties > Sharing Tab > go down a bit. Then, you might see "Network Setup Wizard" or you'll see two check boxes:
1-"Share this folder on the network" and
2-"Allow network users to change my files".
If you see the wizard, then you need to follow it through and hopefully select everything correctly; there's a guide here at the MS site. For you (the leet user) to be able to access your mate's (clueless drongo) computer, both of these boxes must be ticked. And I'm sorry, but you can't do this without the owner's consent...

Just to check if you or your friend have done the setup right, the shared folder will come up in Network Places (I think, not using Windows atm) under the workgroup that you chose in the wizard.

Ok, so the folder is being shared, how do I access it from Ubuntu? Well the simplest way is to just click there by going to Places > Network. It might take a while to load because it does some sort of search. The other way to access it is from the command line, useful for those who want to move stuff from a dead server box that doesn't have any USB ports! Lol...

I'm running Ubuntu 10.10 and I didn't have any dependency problems, but that may be different on other distros. There are two filesystem types, cifs and smbfs, which can be used to mount shared folders from machines on the network onto a place on your computer. Currently, it seems that they can only mount machines that have a wired connection. Maybe something to do with too many fingerprints...not sure.

Create a mount point on your computer first: I went with /mnt/test. But any will do.
sudo mkdir /mnt/test

Now, assuming that you know the IP address of your mates computer and have already obtained their username/password, run the following command for CIFS:
sudo mount -t cifs -o username=drongo,password=password //192.168.1.1/shared-folder /mnt/test

And for smbfs, use -t smbfs instead:
sudo mount -t smbfs -o username=drongo,password=password //192.168.1.1/shared-folder /mnt/test

If they didn't use a password, just leave it out and when it prompts for one, just hit ENTER. I did run into one hiccup and I'll just voice that here. The IP address that you use has to be the address of the wired modem not the wireless modem, otherwise it won't work. That's it. I'm sure I've left some other important information out so please correct me if you find it.

Other than that, the folder should now be accessible at /mnt/test for you to copy, write, delete or even put stuff there. Maybe you're a nice friend and you want to give them some music? It's possible.
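One refinement I'll mention as an assumption rather than something I've battle-tested: mount.cifs supports a credentials= option, so the username/password can live in a file only you can read instead of sitting in your shell history. Using the same hypothetical username and password as above:

```shell
# Write the credentials file and lock it down so only the owner can read it.
cat > ~/.smbcredentials <<'EOF'
username=drongo
password=password
EOF
chmod 600 ~/.smbcredentials
# Then mount with:
#   sudo mount -t cifs -o credentials=$HOME/.smbcredentials //192.168.1.1/shared-folder /mnt/test
```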

Don't forget to unmount!
sudo umount /mnt/test

Friday, January 28, 2011

Quick styling




Quick styling no embedding!

<style type="text/css">
pre.source-code {
  font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace;
  color: #000000;
  background-color: #eee;
  font-size: 12px;
  border: 1px dashed #999999;
  line-height: 16px;
  padding: 10px;
  overflow: auto;
  width: 100%
}
</style>

<pre class="source-code"><code>Quick styling no embedding!
</code></pre>

Quick styling embedded! 

Getting a VM to work...the hard way


What am I talking about? I still don't really understand the concept of a computer running inside a computer. And I don't mean a laptop running inside a desktop case! I'm talking about Virtual-Machines. Loading an OS from a recently acquired image and then making it run in the background of an already running OS. Sounds like I might need a faster computer. Or get some liquid N2 and just frost up the bottom of my laptop. At least my palms wouldn't get hot. It does sound like a bit much.

Kernel-based Virtual Machine (KVM) looks like a decent package. A bare-bones kind of software that hopefully won't weigh down the running system. Problem is that I don't know how to use it, and I think that it runs from the command line, which is what I want. There is an ever-so-useful HowTo at their site, and I'm testing that the procedures work as I type this. I should go to bed.

Install the packages below as a start.
sudo apt-get -y install kvm qemu bridge-utils uml-utilities
sudo modprobe kvm
sudo apt-get -y install libvirt-bin
sudo apt-get -y install virt-manager

The last one is meant to be some sort of GUI but I have no idea how to use it, yet. The modprobe line loads the KVM kernel module without having to restart your computer, even though I've restarted mine several times before even getting this far.

Next, you'll want to create a disk image for the guest, or in other words, a place where the virtual machine will live/sleep. From what I gather, qcow2 is a disk image format used by QEMU - the file only takes up as much space as the guest has actually written, growing on demand up to the size you set. The qemu-img tool creates one like this:

qemu-img create -f qcow2 /path/on/computer/where/it/will/live/imagename.img 10G

qemu-img will do its thing; the last parameter is the size of the image, 10G - I haven't played around with this much. When you have finally downloaded the Free and Open Source Linux distro that you so desire, just go ahead and install it.

sudo qemu-system-x86_64 -hda imagename.img -cdrom /path/where/.iso/is -m 512

qemu-system-x86_64 is the actual command - it's QEMU's x86-64 system emulator, and using it explicitly means there's no confusion about whether the system you are installing is 32 or 64 bit (the 64-bit emulator can run both). The final -m tells qemu how much RAM (in MB) you want to allocate to the VM. I'm finding out now that maybe I should have used -m 1024, 36mins57s remaining...ugh!

1:29:40AM - Still waiting...

1:34:46AM - Meanwhile, test that the printer works

1:49:05AM - Printer works, going to bed, check the install in the morning.

5:35:19AM - Still going, stupid WINS param...

8:54:24AM - Yay! It's working! I'm now running Debian 2.22.3

Lastly, when you need to start up the VM again, it is a simple call of qemu with the disk image created above - no -cdrom needed any more. As you will probably find out, it's much simpler to create a directory and store both the image and the .iso there; that's what I did.
qemu-system-x86_64 -hda /path/to/imagename.img

So that's it. The system seems to be working fine. I have somehow lost track of where the KVM functionality is taking place, as Qemu seems to be doing all the work. My understanding of KVM has been rewritten, again - I must have missed something.
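On that note - and this is my guess at where the confusion comes from - QEMU only uses KVM's hardware acceleration when told to; otherwise it silently falls back to pure (slow) emulation. A sketch, reusing a hypothetical image path, again built as a string for inspection:

```shell
# -enable-kvm switches on KVM acceleration (needs /dev/kvm to exist, i.e.
# the kvm module loaded); the rest matches the earlier invocation.
QEMU_CMD="qemu-system-x86_64 -enable-kvm -m 512 -hda /path/to/imagename.img"
echo "$QEMU_CMD"
```

A quick sanity check that the modprobe step took: `ls /dev/kvm` should show the device.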

Sunday, January 2, 2011

Creating an autorun script for when a specific USB device is plugged into your Computer

Much of my information came from this thread, very helpful: http://ubuntuforums.org/showthread.php?t=502864

Several steps are involved and several locations will be accessed heavily. These are:
/etc/udev/rules.d - where the .rules file will be kept for the computer to access when the USB device of choice is plugged in, this ultimately points to the script file below
/usr/local - or wherever your script file is located.
/home/... - any other places where script files that will be used are stored, python, C etc

/etc/udev/rules.d
A README file is available in this folder and it is helpful to read. You will need to create a file here which will be picked up by your computer and read when an action occurs.

The file name must begin with two digits followed by a dash and end with .rules, e.g. 70-filename.rules (the digits control the order the rules files are read in). It will contain the following script:

ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="0d49", ATTRS{idProduct}=="7350", RUN+="/usr/local/script_file.sh", SYMLINK+="my_device"

If you are lost, just take a look at the files in /etc/udev/rules.d and make a similar filename.

To make it work for a specific device, with the device plugged in, run lsusb in a terminal and find the device Vendor and Product numbers. For the Kodak Co. device below, Vendor is 040a and the Product is 0576:
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 040a:0576 Kodak Co.
Bus 002 Device 001: ID 0000:0000
Bus 001 Device 001: ID 0000:0000
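Plugging that Kodak device's IDs into a rule of the same shape as above would give the following (the script path here is just a placeholder of mine):

```
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="040a", ATTRS{idProduct}=="0576", RUN+="/usr/local/kodak_script.sh"
```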

The RUN+= is the location of your script file. It is very important that this file has user executable permission as well as for all other .py files etc that will be used by the script. To do this, go to the directory with the file and run:

$ sudo chmod u+x filename

and check the permissions with

$ ls -all

It should be: -rwxr--r--
These are the permissions that work for me; I'm not sure if this is the same across all distros.
The location of the script file is not important, but make it an easy place to remember. SYMLINK+= is a feature I'm not familiar with as yet, however it does create a symlink to the device in /dev under the name you give, e.g. /dev/my_device.
The ATTRS before {idVendor} and {idProduct} can be changed to SYSFS (the older udev syntax) if the device is not being recognised once plugged in. This can only be tested by you and your product, although there may be better explanations out there.

The SUBSYSTEM variable can be left out completely, however it does give useful information to the computer. Other variables such as usb_device and pci... exist but usb is the one that works for me.

At first, it is advised to make the script file a simple copy script, like a cp command of a file you know to a place you know. If, after you plug in the device, you find that the file has been copied to the new destination, you can be sure that the .rules file has been successfully read. This is definitely the hardest part of the setup of this 'autorun' script.

Before I wrap things up, I did want to add a work-around for the .sh file, the file pointed to by the .rules file. I had problems when plugging in my device because the .sh file runs first, before the computer's usual auto-mount code. This was a problem because I wanted to point to a folder on the device, but the device hadn't been mounted yet. I could have mounted it myself but I ran into a dead end. An alternative was using the sleep command followed by my code, inside some curly braces {}, looking like this:
#!/bin/bash
{

# give the system time to mount the device before copying to it
sleep 5
cp -ar /a/folder/on/my/computer /folder/on/device

} &

The {} groups the commands, and the trailing & tells the interpreter to run the group in the background, so the script returns immediately. With the sleep command, there is a 5 second pause while the computer runs its usual routine of mounting the device. After the 5 seconds, it copies the folder to the location on the device.
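A slightly more patient variant of that sleep trick - untested by me here, and the mount point is a placeholder - polls for the mount point instead of hoping 5 seconds is enough:

```shell
#!/bin/bash
# Wait up to 30 seconds for a directory (the device's mount point) to
# appear; succeeds as soon as it exists, fails if it never shows up.
wait_for_mount() {
    dir=$1
    tries=30
    while [ ! -d "$dir" ] && [ "$tries" -gt 0 ]; do
        sleep 1
        tries=$((tries - 1))
    done
    [ -d "$dir" ]
}

# In the udev script, the copy then sits inside the same backgrounded
# braces as before (paths are placeholders):
#   { wait_for_mount /media/my_device && cp -ar /a/folder/on/my/computer /media/my_device; } &
```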

If you have a known fix for mounting a device manually using the /dev location, I'd be happy to hear it. I'm now using this method to auto-update my uni files on my external HD whenever I plug it in. This is my first How-To, so enjoy, rate, comment.