Nexus One to be launched tomorrow

Ever since some Google employees tweeted about a Google Phone, a.k.a. the Nexus One, the tech world has been abuzz with rumors about the phone. Many established news sources, possibly with inside help, have reported that the new phone is designed by Google and manufactured by HTC. There are several videos of the phone in action on YouTube, and GSMArena has posted the technical specifications: the phone is said to have 512 MB of RAM and a 1 GHz processor.

Hopefully, after almost a month of anticipation, the wait is finally over. Google has called an “Android Press Gathering” for tomorrow, and everyone believes that the new phone will be announced at the press meet.

Encouraged by Google’s regular practice, I checked the page and found that it differs from the standard Google 404 page, which suggests that something is going on there. So keep checking the URL, as it may become the official page of the Google phone once it is released.

Update: Engadget has posted a review of the Nexus One.

Saving a wget file with a different filename

Anyone who has worked with Linux must be familiar with the wget utility. wget lets you download files from a remote server using the HTTP, HTTPS and FTP protocols. Downloading a file with wget is as simple as

wget http://example.com/my-photo.jpg

Where http://example.com/my-photo.jpg is the URL of the file to be downloaded.

By default, wget saves the file under the same name as the fetched file. For example, the above command saves the file as my-photo.jpg in the current working directory. If a file with that name already exists, the downloaded file is named my-photo.jpg.1, my-photo.jpg.2 and so on, until a non-existent filename is found.

It is possible to explicitly specify a different name for the downloaded file using the -O switch (--output-document also works, but short is sweet). The new command is

wget -O photo.jpg http://example.com/my-photo.jpg

Where photo.jpg is the new filename.

But be careful while using the -O switch: if a file with the given name already exists, it will be overwritten by the new file.
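
Both behaviours are easy to try out without a real remote host. The sketch below spins up a throwaway local HTTP server as a stand-in; the port, directories and filenames are arbitrary choices for the illustration.

```shell
# Serve a dummy file from a scratch directory (stand-in for a remote host).
srvdir=$(mktemp -d)
echo "dummy image data" > "$srvdir/my-photo.jpg"
( cd "$srvdir" && exec python3 -m http.server 8731 ) >/dev/null 2>&1 &
srv=$!
sleep 1

# Download into a fresh working directory.
cd "$(mktemp -d)"
wget -q http://localhost:8731/my-photo.jpg        # saved as my-photo.jpg
wget -q http://localhost:8731/my-photo.jpg        # name taken, saved as my-photo.jpg.1
wget -q -O photo.jpg http://localhost:8731/my-photo.jpg   # saved as photo.jpg

ls    # lists my-photo.jpg, my-photo.jpg.1 and photo.jpg

kill $srv
```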

Copying multiple files simultaneously using scp utility

Happy New Year to all.

I have been using the Secure Copy (scp) utility to copy files between my local server and my development server. Sometimes I have to copy more than one file. Previously I copied the files one at a time, which is very annoying, as you have to type the password every time you run the command. But it is possible to copy multiple files using scp, just like with the copy (cp) utility.

When you have to copy multiple files to your remote server, the syntax is similar to the cp command.

scp file1.sql file2.sql joyce@example.com:~/upload

Where file1.sql and file2.sql are the files to be copied, joyce is the username, example.com is the hostname and ~/upload is the destination directory on the remote server.

To download multiple files from the remote server, the command is

scp joyce@example.com:"file1.log file2.log" ~/logs

Where file1.log and file2.log are the files to be downloaded and ~/logs is the destination directory on the local server. Notice the quotes around the filenames: they ensure that the filename list is not expanded by the local shell but is passed through to the remote shell. Similarly, when you download files using wildcards (*.php, files_?.log etc.), you should enclose the pattern in quotes to ensure that the expansion is done by the remote server.
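
The effect of the quotes is easy to see locally, using echo as a stand-in for the command (the scratch directory and filenames below are invented for the illustration):

```shell
# Scratch directory containing exactly two log files.
cd "$(mktemp -d)"
touch file1.log file2.log

# Unquoted: the LOCAL shell expands the wildcard before the command runs.
echo *.log      # prints: file1.log file2.log

# Quoted: the pattern is passed through untouched; with scp it would be
# expanded by the remote shell instead.
echo "*.log"    # prints: *.log
```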

The -r option can be used to copy directories recursively.

scp -r joyce@example.com:~/logs ~/logs

This may not be a lifesaver tip, and the time saved by this method may be small. After all, when a large number of files are to be transferred, I use FTP, or tar my files and copy the archive. But when things go wrong, even this small gain can help.

Extracting a single table from a MySQL dump

Update: I have written a wrapper script for extracting single or multiple tables from a dump file, so it is now possible to extract tables with a single command.

The other day, while working with the MySQL database of one of my sites, I accidentally damaged one of the tables irrecoverably. Fortunately, I was using the AutoMySQLBackup script to back up all my databases at 12 AM every day. To save time, I decided to import only the damaged table. But when I tried to open the .sql file created by mysqldump, I realized that it was not going to be as easy as I thought: the dump file was over 100 MB in size, and none of my text editors could open a file that large.

As usual, I approached Google for a solution, and it introduced me to two different ones: AWK (a programming language) and sed (a Unix utility). There is only a very slight difference between the two commands.

awk '/-- Table structure for table .tbl_first./,/-- Table structure for table .tbl_second./{print}' mydumpfile.sql > mytable.sql

sed -ne '/-- Table structure for table .tbl_first./,/-- Table structure for table .tbl_second./p' mydumpfile.sql > mytable.sql

Here tbl_first is the table I wanted to extract and tbl_second is the table below it. The above commands search the file mydumpfile.sql and extract the text between the start string (-- Table structure for table .tbl_first.) and the end string (-- Table structure for table .tbl_second.). The dots before and after the table names are wildcard characters that match the backtick (grave accent), which has a special meaning in shell commands. The {print} action (p in sed) prints the extracted text, which is then redirected to the file mytable.sql.
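
As a quick sanity check, the sed variant can be tried on a toy dump file (the table names and contents here are invented for the example):

```shell
# A miniature mysqldump-style file with two tables.
cd "$(mktemp -d)"
cat > mydumpfile.sql <<'EOF'
-- Table structure for table `tbl_first`
CREATE TABLE `tbl_first` (id INT);
INSERT INTO `tbl_first` VALUES (1);
-- Table structure for table `tbl_second`
CREATE TABLE `tbl_second` (id INT);
EOF

# Extract everything from the tbl_first header up to the tbl_second header.
sed -ne '/-- Table structure for table .tbl_first./,/-- Table structure for table .tbl_second./p' \
    mydumpfile.sql > mytable.sql

cat mytable.sql
```

Note that the extracted file also ends with the tbl_second marker line itself, since sed prints the line that closes the range; you will want to delete that last line before importing.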

But that didn’t solve my problem completely: I was not sure of the order of the tables in the .sql file. This time grep (another powerful Unix utility) came to my rescue. The following command lists all the tables in the file mydumpfile.sql in the order in which they appear.

grep 'Table structure' mydumpfile.sql | cut -d'`' -f2
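
This too can be checked against a throwaway dump (the table names here are invented for the example):

```shell
cd "$(mktemp -d)"
cat > mydumpfile.sql <<'EOF'
-- Table structure for table `users`
CREATE TABLE `users` (id INT);
-- Table structure for table `orders`
CREATE TABLE `orders` (id INT);
EOF

# cut -d'`' -f2 splits each matching line on backticks and keeps the
# second field, i.e. the table name between the backticks.
grep 'Table structure' mydumpfile.sql | cut -d'`' -f2
# prints:
# users
# orders
```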

I don’t know a lot about shell commands, but with my very limited experience I can say that they are extremely powerful. Two small lines of code saved me a lot of time.
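
Putting the two ideas together, the whole thing could be wrapped in a small shell function that finds the end marker automatically. This is only a sketch of the approach; the function name and details are my own invention, not the wrapper script mentioned in the update above.

```shell
# extract_table DUMPFILE TABLE
# Print the dump section for TABLE: from its "Table structure" header
# up to (but not including) the next table's header, or end of file.
extract_table() {
    awk -v t="$2" '
        /-- Table structure for table/ { in_t = index($0, "`" t "`") > 0 }
        in_t { print }
    ' "$1"
}

# Hypothetical usage: extract_table mydumpfile.sql tbl_first > mytable.sql
```

Unlike the raw sed/awk range, this version does not require knowing which table comes next, and it does not include the closing marker line in the output.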

Back after a long time

Today, while visiting DigitalPoint, I noticed that my last post was more than a year ago. Hmm… that is a pretty long time. Many things happened during this one year. I am no longer a student; I now run my own web development firm with my brother. The new role brought new responsibilities with it, so I could never find time for blogging, even though I really wanted to. Anyway, today I decided to spend some time upgrading my blog, changing its design and writing this update.

I am so sorry for not updating Local Analytics for a long time. I have heard that it does not work with newer versions of WordPress. I can’t make a promise, but I will try to update it when I have time.

That’s all for now. Have a great day 🙂

Google being reported as fraud site by Opera

It has been almost a month since my last post, and I will be extremely busy till the end of this month. The new year has been going great so far: all my exams, except one, were easy. Hope you all had a great Christmas and New Year (yes, I know it is a bit too late 🙂).

Early this morning, my brother rushed into my room and told me that our latest site is being reported as a fraud site by Opera. I ran to my computer and found that it was true. I was surprised, because the site is not even complete and yet it was being flagged. When I reloaded the page, I noticed that it was not actually our site my brother was referring to, but Google’s cache of it. I checked the site itself and found that it showed no warnings. Next I checked Google’s cache again and found that it too showed the same error, although the warning was not shown on the datacenter homepage. I checked the cached page on some other datacenters and they did not show any such warning.

BTW, I have received several pieces of feedback and comments on Local Analytics. I am sorry I wasn’t able to respond to your comments and fix the errors you reported. I will try my best to fix those errors by the end of this month or early February.

Google reported as fraud site by Opera Site Check

Merry Christmas

Today, on this beautiful day, I would like to wish all my readers a Merry Christmas. As this year comes to an end, I also want to thank all my readers for your support for my blog and my plugin. It has been more than 6 months since I started blogging. Compared to some other blogs that started at the same time, this blog has not grown much in terms of visitors or RSS subscribers, but I am content with its current standing.

Achievements in 2007

Once again thanks to all my readers. I will try to be a better blogger in 2008. And till then, happy holidays.

Local Analytics v1.2

Sorry for being a little late. As promised, I am releasing the latest version of Local Analytics today. Thanks to Carl and DG, this version is compatible with the latest code update to Google Analytics. This version also includes AdSense and YPN ad click tracking; the ad click tracker is based on the Free AdSense Tracker by Aaron Wall.

BTW, I have obtained access to the Subversion repository for Local Analytics. From now on, you can download the current and previous versions of the plugin from the WP Plugin Repository.

I have also fixed a mistake in the version numbering scheme: the last version was renamed from v1.13 to v1.1.3 to prevent conflicts in the future.

Change Log

  • Included compatibility with latest Google Analytics Code Update
  • Added support for AdSense and YPN ad click tracking
  • Changed the version number pattern

Local Analytics Update

Sorry, this is not a post announcing the release of the latest version of Local Analytics. I wanted to update the plugin from urchin.js to the new ga.js yesterday, but unfortunately I couldn’t turn on my computer due to a power failure. I have my first exam tomorrow, and I’ll try to release the updated version after that.

Thanks to Carl and DG for notifying me about the update to Google Analytics.

Secret Classroom Blog Contest

When posting my previous post, I never thought that I would be writing another one so soon. But on seeing the prizes, I couldn’t keep myself from entering Thor Schrock’s Secret Classroom Blog Contest. The prizes are

  1. USD 1000 via PayPal
  2. A copy of Joel Comm’s Secret Classroom DVD Set
  3. A 22″ widescreen Acer LCD computer display

What is Secret Classroom?

Secret Classroom is an internet marketing course on a set of 12 DVDs featuring all 12 episodes of the reality show The Next Internet Millionaire, along with two-hour sessions on internet marketing by experts like Jeff Walker, Rich Schefren, Ray Edwards and Mark Joyner. These experts teach a handful of aspiring entrepreneurs the secrets of successful internet marketing, as if in a classroom. This much-awaited course was released on December 5th and became an instant hit among internet marketers, both amateur and professional. The Secret Classroom is not a get-rich-quick gimmick; it is an education that will improve your income and your internet marketing business for the rest of your life.

I would have really loved to have a copy of the Secret Classroom. Unfortunately, the price of the set is unaffordable for me right now, but I will surely get a copy when I can afford one. Have any of you bought a copy? If so, can you please share your opinion of this internet marketing course?

So stop thinking and enter the Secret Classroom Blog Contest at the earliest. Remember that time and tide wait for none.