Goals 2015

Today I am adding some of my goals for this year (2015), and this time I am very serious about my goals and plans. I am planning to post the goals on my blog and work to make them happen in the coming months, so that I can retrospect my progress.

  1. Concentrate on some open source projects, or come up with new ideas and release them as open source projects.
    1. Improve my coding skills and get exposure to different coding standards.
    2. Learn new technologies/languages.
    3. Understand the design patterns used in their code.
  2. Book Reading – At least one book per month. It can be anything: professional books, novels, anything.
  3. Arduino/Netduino/Raspberry Pi/Android – This has been a long-time dream of mine, but I couldn't make it happen. This year I will add some home automation with these boards.
  4. Blog Writing – I am doing it right now… The target is at least 50 posts this year.
  5. Photography – This too is a long-time dream that I couldn't make happen last year or before…
  6. Reduce weight – Target is 73 to 75. Some of the sub-goals:
    1. Do some exercise and yoga daily.
    2. Wake up early / early to bed (today I am already late…).
Apart from these, there are some personal goals that I will have to make happen this year (2015).
I will update the progress here one by one.
Happy New Year (2015), friends… Let's make it happen this time. 🙂


Hi friends, have you heard about the robots.txt file? It is an important file in web development. The Robot Exclusion Standard, also known as the Robots Exclusion Protocol or the robots.txt protocol, is used to keep files away from web crawlers (also called robots or web spiders).

Web crawlers… what are they, and what is their purpose? Web crawlers are programs used (most famously by search engines like Google) to gather data from the World Wide Web.

How do search engines work?

[Image: how a search engine crawls and indexes the web]

  1. Web spiders (such as Google's) fetch pages from the World Wide Web and organize the data into the search engine's database, based on the meta tags assigned for indexing each page. For example, if a spider reads an index meta tag containing 'asp', the page gets mapped under ASP.NET.
  2. When we search for the word 'asp' in Google, it searches its database (already populated by the web crawlers) for the keyword 'asp' and returns the results, ordered by hit count (rank).

[Image: sample Google search results for 'asp']

Now I am coming to the point of robots.txt: a web crawler will fetch whatever data it can from our web server. When we want to stop it from getting our personal data, or pages we need to hide (for example, a login page), that is when we need the Robot Exclusion Standard (robots.txt). Note that robots.txt is only a convention: well-behaved crawlers honor it, but it is not a security mechanism.

Step 1: Add the pages you need to hide, one by one, to the robots.txt file.
Step 2: Upload the file to the root of your web server.

That's it…
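As a sketch of the two steps above, a minimal robots.txt that hides a login page could look like this (the path /login is only a hypothetical example; use your site's actual page):

```
User-agent: *
Disallow: /login
```

Every compliant crawler ("*" matches all user agents) will then skip any URL starting with /login.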

This example allows all robots to visit all files, because the wildcard "*" matches all robots and the Disallow field is left empty:

User-agent: *
Disallow:
This example keeps all robots out of the entire site:
User-agent: *
Disallow: /
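To see these rules from the crawler's side, Python's standard urllib.robotparser module can evaluate a robots.txt policy. This is just a sketch using a hypothetical policy that hides a /login page, as in the steps above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt policy: hide the /login page from all robots.
rules = """\
User-agent: *
Disallow: /login
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())  # feed the policy to the parser line by line

# A well-behaved crawler asks before fetching each URL.
print(parser.can_fetch("*", "/index.html"))  # True  (public page)
print(parser.can_fetch("*", "/login"))       # False (hidden page)
```

Remember, though, that this check is voluntary: a misbehaving crawler can simply ignore robots.txt, which is why it is not a substitute for real access control on a login page.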

References:
  1. Robots.txt
  2. How Search Engines Work

Examples:
  1. Google robots.txt
  2. Wikipedia robots.txt