The History of Google

Beginning
Google began in January 1996 as a research project by Larry Page and Sergey Brin, Ph.D. students at Stanford University.

In search of a dissertation theme, Page had been considering—among other things—exploring the mathematical properties of the World Wide Web, understanding its link structure as a huge graph.[3] His supervisor, Terry Winograd, encouraged him to pick this idea (which Page later recalled as "the best advice I ever got") and Page focused on the problem of finding out which web pages link to a given page, based on the consideration that the number and nature of such backlinks was valuable information for an analysis of that page (with the role of citations in academic publishing in mind).

In his research project, nicknamed "BackRub", Page was soon joined by Brin, who was supported by a National Science Foundation Graduate Fellowship. Brin was already a close friend, whom Page had first met in the summer of 1995, when Page was part of a group of potential new students that Brin had volunteered to show around the campus. Both Brin and Page were working on the Stanford Digital Library Project (SDLP). The SDLP's goal was "to develop the enabling technologies for a single, integrated and universal digital library" and it was funded through the National Science Foundation, among other federal agencies.

Page's web crawler began exploring the web in March 1996, with Page's own Stanford home page serving as the only starting point. To convert the backlink data that it gathered for a given web page into a measure of importance, Brin and Page developed the PageRank algorithm.[3] While analyzing BackRub's output—which, for a given URL, consisted of a list of backlinks ranked by importance—the pair realized that a search engine based on PageRank would produce better results than existing techniques (existing search engines at the time essentially ranked results according to how many times the search term appeared on a page).
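The idea behind PageRank can be illustrated with a toy power-iteration sketch. This is only an illustration of the principle described above, not Google's production algorithm; the link graph, damping factor, and iteration count are made-up assumptions.

# Toy PageRank by power iteration (illustrative sketch, not Google's code).
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A page with many backlinks from well-ranked pages ("C" here) ends up highest.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(graph))

A page's score thus depends not only on how many pages link to it but on how important those linking pages are, which is what made the ranking harder to game than simple keyword counting.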

A small search engine called "RankDex" from IDD Information Services (a subsidiary of Dow Jones) designed by Robin Li was, since 1996, already exploring a similar strategy for site-scoring and page ranking. The technology in RankDex was patented and used later when Li founded Baidu in China.

Convinced that the pages with the most links to them from other highly relevant Web pages must be the most relevant pages associated with the search, Page and Brin tested their thesis as part of their studies, and laid the foundation for their search engine:

Some Rough Statistics (from August 29th, 1996)
Total indexable HTML urls: 75.2306 Million
Total content downloaded: 207.022 gigabytes
...
BackRub is written in Java and Python and runs on several Sun Ultras and Intel Pentiums running Linux. The primary database is kept on a Sun Ultra II with 28GB of disk. Scott Hassan and Alan Steremberg have provided a great deal of very talented implementation help. Sergey Brin has also been very involved and deserves many thanks.

Indonesian
Who doesn't know Google? Everyone who surfs cyberspace is already well acquainted with it. Google is famous for its search engine in the wilderness of the online world. All you have to do is type in the keyword you want, and Google will track down whatever information you are looking for.

Google's two "o"s are also distinctive: when search results are found, the number of "o"s displayed grows with the number of pages the search engine has retrieved.

The word Google comes from the word "googol", a term coined by Milton Sirotta, nephew of the American mathematician Edward Kasner. Sirotta used "googol" to denote the number 1 followed by 100 zeros, so the name Google is a play on that word.

But did you know that Google is not only unusual in the origin of its name? It also has an unusual history. Google was born out of a chance meeting between two young men in 1995. Larry Page, a 24-year-old University of Michigan alumnus on a weekend visit, happened to be introduced to Sergey Brin, a 23-year-old student who had been assigned to show Larry around.

In that chance meeting, the two future founders of Google fell into long discussions. They held different opinions and viewpoints and often ended up arguing, but their differences in thinking produced a unique approach to one of the biggest challenges in computing: how to retrieve relevant data from a massive data set.

In January 1996, Larry and Sergey began collaborating on a search engine they named BackRub. A year later their novel approach to link analysis had earned BackRub a reputation, and word of the new search technique spread across campus.
Larry and Sergey kept refining Google's technology through early 1998, and they also began looking for investors to develop it further.

Their efforts paid off. They received funding from Andy Bechtolsheim, a friend from campus circles and a founder of Sun Microsystems. The meeting took place at dawn on the porch of a Stanford student dormitory in Palo Alto. Larry and Sergey could only give a brief demo, because Andy did not have much time.

On the strength of that demo, Andy agreed to help with a check for 100 thousand US dollars. Unfortunately, the check was made out to Google Inc., even though at that point Sergey and Larry had not yet founded any company named Google.

Andy's investment posed a dilemma. Larry and Sergey could not cash the check as long as there was no legal entity called Google Inc. So the two founders went back to work raising money, seeking backers among family, friends, and colleagues until they had collected about one million dollars. At last, Google Inc. was incorporated on September 4, 1998, and officially opened in Menlo Park, California.

Google's mission is "to organize the world's information and make it universally accessible and useful." Google's philosophy includes slogans such as "Don't be evil" and "Work should be challenging and the challenge should be fun," reflecting the company's relaxed corporate culture.

Today Google ranks first among the top 100 most desirable companies to work for in America, with a workforce of around 10 thousand people.

Google Inc. is an American multinational corporation specializing in Internet services and products. These include search technology, web computing, software, and online advertising. Most of its profits come from AdWords.

Google was founded by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University. Together the two hold 16 percent of the company's shares. They incorporated Google as a privately held company on September 4, 1998. Its mission statement is "to organize the world's information and make it universally accessible and useful," and its unofficial slogan is "Don't be evil." In 2006, Google moved its headquarters to Mountain View, California.

Since its founding, the company's rapid growth has produced a range of products, acquisitions, and partnerships beyond Google's core search engine. The company offers online productivity software, including email, an office suite, and social networking. Its desktop products include applications for browsing the web, organizing and editing photos, and instant messaging. The company led the development of the Android mobile operating system and of Google Chrome OS for the Chromebook line of netbooks. Google has also moved into communications hardware: it works with major electronics manufacturers to produce its Nexus devices, and it acquired Motorola Mobility in May 2012. In 2012, fiber-optic infrastructure was installed in Kansas City to facilitate the Google Fiber broadband Internet service.

The company is estimated to run more than one million servers in data centers around the world and to process over one billion search queries and about 24 petabytes of user-generated data every day. In December 2012, Alexa listed google.com as the most visited website in the world. Google sites in other languages rank in the top 100, as do several other Google-owned sites such as YouTube and Blogger. Google ranks second in the BrandZ brand equity database. Its market dominance has drawn criticism over copyright, censorship, and privacy. In 2014, Google was also recognized by Business Insider as the company with the most valuable brand.

The Difference Between Hackers & Crackers

What are crackers and hackers?
A cracker (also known as a black hat hacker) is an individual with extensive computer knowledge whose purpose is to breach or bypass Internet security or gain access to software without paying royalties. The general view is that, while hackers build things, crackers break things. Cracker is the name given to hackers who break into computers for criminal gain, whereas hackers can also be Internet security experts hired to find vulnerabilities in systems; these hackers are also known as white hat hackers. Crackers' motivations range from profit, to a cause they believe in, to general maliciousness, to the simple thrill of the challenge. They may steal credit card numbers, plant viruses, destroy files, or collect personal information to sell. Cracker can also refer to someone who reverse engineers software and modifies it for their own amusement. The most common way crackers gain access to networks or systems is through social engineering, whereby the cracker contacts employees at a company and tricks them into divulging passwords and other information that allows the cracker to gain access.

>>A hacker is a term for a person or group of people who make useful contributions to the world of networks and operating systems and who write helpful programs for networking and computing. Hacking can also describe the work of searching for weaknesses in a system and offering ideas or suggestions that can fix the weaknesses that are found.

The term hacker emerged in the early 1960s among members of the Tech Model Railroad Club, a student organization at the Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory. These students were among the pioneers of computer technology and worked with a number of mainframe computers. The English word "hacker" first appeared with a positive meaning, describing a member with real skill in computing who could write better programs than those that had been designed together. Then, in 1983, the term hacker began to take on a negative connotation: that year the FBI arrested, for the first time, the computer crime group The 414s, based in Milwaukee, United States (414 was their local area code). The group, which came to be called hackers, was found guilty of breaking into 60 computers, from machines at the Memorial Sloan-Kettering Cancer Center to those at Los Alamos National Laboratory. One of the perpetrators received immunity in exchange for testimony, while the other five were sentenced to probation.

Later, another group emerged who called themselves hackers but were not. These people (mostly adult men) got their satisfaction from breaking into computers and manipulating telephones (phreaking). True hackers call such people crackers and do not like to associate with them. True hackers regard crackers as lazy, irresponsible, and not very bright, and they reject the idea that breaking through security makes someone a hacker.
Hackers hold an annual gathering every mid-July in Las Vegas. The world's largest hacker gathering is called Def Con, and it serves mainly as a venue for exchanging information and technology related to hacking.

Hackers carry a negative connotation because the public misunderstands the difference between the terms hacker and cracker. Many people assume that hackers are the ones who cause losses to others, for example by defacing a website, injecting virus code, and so on, when in fact those people are crackers. It is crackers who exploit security holes not yet patched by the software makers (bugs) to break into and damage a system. For this reason hackers are usually understood as falling into two groups: White Hat Hackers, the hackers in the true sense, and crackers, who are often called Black Hat Hackers.

>>A cracker is a term for someone who looks for weaknesses in a system and breaks into it for personal ends, profiting from the compromised system through data theft, deletion, and much more.
  

>>Hackers:
1. Have the ability to analyze the weaknesses of a system or site. For example, if a hacker tries to test a site, its contents will certainly not be left in disarray or disrupt other users. Hackers usually report what they find so it can be fixed properly; a hacker will even offer input and suggestions that can close the hole in the system they entered.
2. Have ethics and are creative in designing programs that are useful to anyone.
3. Are not stingy about sharing their knowledge with people who are serious about it, in the name of knowledge and goodness.
4. Will always deepen their knowledge and broaden their understanding of operating systems.

>>Crackers:
1. Are able to build programs for their own benefit that are destructive in nature and turn them into profit. Examples: viruses, credit card theft, warez, bank account break-ins, theft of email or web server passwords.
2. Can act alone or in groups.
3. Have hidden websites or IRC channels that only certain people can access.
4. Have IP addresses that cannot be traced.
5. The most frequent case is carding, that is, credit card theft, followed by breaking into sites and wrecking their contents. Yahoo!, for example, once suffered such an incident and could not be accessed for a long time, as did the klikBCA.com case that was hotly discussed some time ago.

>>There are several kinds of hacking activity. One is social hacking, where what matters is information: what system the server runs, who owns the server, who the admin managing it is, what kind of connection is used and how the server is connected to the Internet, whose connection it rides on, what information the server provides, whether the server is also connected to a LAN inside an organization, and so on.

>>Technical hacking is the technical act of infiltrating a system, either with tools or by using facilities of the system itself, aimed at the weaknesses (security holes) present in the system or its services. The essence of this activity is to gain full access to the system by any means whatsoever.

>>In conclusion, a "good" hacker is someone who knows what they are doing, is aware of all the consequences of what they do, and takes responsibility for what they do. A "bad" hacker, commonly called a cracker, is someone who knows what they are doing but often does not realize the consequences of their actions, and who refuses to take responsibility for what they know and have done. Because a hacker is someone who acts knowingly, the hacker world naturally has a code of ethics that must be upheld and obeyed by all.
More about crackers:

>>A cracker is someone who tries to penetrate another person's computer system or break through someone else's computer security in order to reap profit or commit a crime.

What is the difference between a hacker and a cracker?
– The line is very thin; for a single reason a hacker can become a cracker and commit acts of destruction, and a cracker can likewise become a hacker.
Just as an example: in the early 2000s, crackers from Italy broke into Portuguese computers, and from those Portuguese machines they launched attacks on Indonesia. Three Indonesian hackers who learned of the attack retaliated.
In short, the three of them managed to knock out the domains ending in .pt, so Portuguese sites ending in .pt could not be accessed (Indonesia's country domain is .id, as in co.id, while Portugal's is .pt).
A discussion was then held in the Portuguese hacker community, and after an investigation it emerged that the original attack had come from Italy.
You see? Retaliating means attacking, attacking means causing damage, and someone who causes damage is called a cracker.
Or take someone using the nickname Tarjo: simply because Australia occupied East Timor, more than 1,000 Australian sites were defaced in a single night as a form of protest, and in that same night Tarjo's title changed to cracker. The most recent case involved Malaysia (the Ambalat dispute), where many hackers likewise turned into crackers overnight by defacing Malaysian sites.

>The working principles of hackers and crackers are actually the same; what separates them is their goal. In terms of skill, crackers and hackers are not far apart either, although crackers often have sharper skills and more daring and recklessness than hackers.

In terms of mentality and integrity, however, the two are worlds apart.
OK, hopefully the explanation above is useful and adds to your knowledge, so you can tell the good from the bad.


An Explanation of Proxies

What is a Proxy?

The proxy we mean here is not the name of a star in the sky, not the biggest mobile-phone mall in Jakarta, and not a brand of women's underwear either. The proxy we mean here is... well, here is an illustration:

Every Internet connection we use has an IP address, and that IP comes from the server of the ISP (Internet provider) we subscribe to. So if our connection is slow, perhaps because the ISP caps its bandwidth, we can use an IP from a proxy instead. The proxy's job is to act as a shadow clone (not Kage Bunshin no Jutsu, mind you) that stands in for our ISP's IP while we surf the Internet. Because the proxy acts as this stand-in IP, we can reach the server we want without being held to our ISP's limits: the proxy is the intermediary between our computer and the destination server, and it does not reveal our ISP, so the IP that shows up on the other side is the proxy's IP.

Technically, a proxy server is a system that acts as an intermediary between client hosts and the servers they want to access. Informally, we could liken a proxy to a matchmaker, a broker, a middleman, a go-between, a third party, an affiliate, or any number of similar terms. So how does it work?

Say, for example, that you want to open Google on the Internet. Your computer is then what we call the client host, and the Google web page you see on your monitor is an HTML file stored on Google's server out on the Internet. In other words, your computer (the client) wants to access Google's server. Then what?

Without a proxy, the moment you type in Google's URL (http://www.google.co.id), your computer sends your request straight to Google's server, with no intermediary. On receiving that request, Google's server, again with no intermediary, sends back a reply in the form of the web page you see on your monitor. And with a proxy?

With a proxy, when you type in Google's URL your computer still sends a request right away, but not directly to Google's server. It goes instead to the computer acting as the proxy server. On receiving your request, the proxy server forwards it to Google's server, and the reply travels back the same way: from Google's server to the proxy server, and only then to your computer.
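A minimal sketch of that flow from the client side, assuming the third-party Python requests library is installed and using a made-up proxy address:

# Sketch: sending the same request directly and through a proxy.
# "proxy.example.com:8080" is a placeholder, and the third-party
# requests library is assumed to be installed (pip install requests).
import requests

URL = "http://www.google.co.id"

# Without a proxy: the request goes straight to Google's server.
direct = requests.get(URL, timeout=10)

# With a proxy: the request goes to the proxy, which forwards it to Google
# and relays the response back; Google sees the proxy's IP, not yours.
proxies = {"http": "http://proxy.example.com:8080",
           "https": "http://proxy.example.com:8080"}
via_proxy = requests.get(URL, proxies=proxies, timeout=10)

print(direct.status_code, via_proxy.status_code)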

So, do you now understand how a proxy works? Simple and easy, right? But then you might ask: why use an intermediary (a proxy) at all if you can deal with the server directly? And doesn't going through a third party usually just make things worse: longer, slower, and more complicated?

True enough. Using a proxy does have its downsides, but it has its upsides as well. Remember, gain and loss go hand in hand: there is no upside without a downside, and vice versa. Which means a proxy would never have been built and used by anyone if all it could do was cause harm, right?

So what do you gain by using a proxy?

To find out, look at the purpose a proxy was originally built for. A proxy is designed to take, carry, and deliver every request from a client, return the result, and then store that request in a place called a cache.

What is a cache? And why does the proxy store those requests in a cache?

Simply put, a cache is a temporary storage area (Google's Indonesian interface likes to call it "tembolok", for reasons only Google knows). Every request that comes in through the proxy is stored in the cache. The goal is to speed up service.

Speed up service?

Like this: when your computer requests a Google web page through the proxy, that page is stored by the proxy in its cache. Then, if you later request the same page, the proxy no longer has to fetch it directly from Google's server; it simply pulls it from the cache. That way your request is served faster, right?

Working this way, a proxy is ideal for any setup in which more than one user shares the same route to the Internet. That is why the networks in offices, schools, universities, and Internet cafes usually use a proxy. For networks like these, a proxy offers far more benefit than harm.

Picture it like this: when one user requests a Google web page, the proxy fetches it, delivers it, and stores it in the cache. When another user then requests the same Google page, the proxy no longer needs to go out to the Internet; it simply serves the page from the cache. That way traffic, network load, and waiting time are all kept lower, right?
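That caching behavior can be sketched very roughly like this, with an in-memory dictionary standing in for a real proxy cache (a real proxy also honors expiry headers and cache size limits, which are ignored here, and the URL is just a placeholder):

# Rough sketch of proxy-style caching: the first request for a URL is fetched
# from the origin server, later requests for the same URL are served from the
# cache. Standard library only; expiry and size limits are deliberately omitted.
import urllib.request

cache = {}  # URL -> response body

def fetch_via_cache(url):
    if url in cache:
        print("cache hit:", url)
        return cache[url]
    print("cache miss, fetching:", url)
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
    cache[url] = body
    return body

fetch_via_cache("http://example.com/")  # first user: goes to the origin server
fetch_via_cache("http://example.com/")  # second user: served from the cache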

So that is the benefit of using a proxy. What about types, then? Are all proxies the same, or do they come in different kinds and groups? And if they do, how do you tell them apart?

Good question. Judged by how they work, proxies do fall into two types. The first type is called a forward proxy server, which works exactly like the example above, and the user normally knows and is aware that they are using this type of proxy.

What Is SSH and How to Use It

Secure Shell (SSH) is a cryptographic network protocol for secure data communication, command-line login, remote command execution, and other network services between two networked computers. It connects a server and a client, which run SSH server and SSH client programs respectively, over a secure channel on top of an insecure network.[1] The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2.

The best-known application of the protocol is access to shell accounts on Unix-like operating systems, but it can be used in the same way for accounts on Windows. It was designed as a replacement for Telnet and other insecure remote shell protocols such as Berkeley rsh and rexec, which send information, notably passwords, in plaintext, leaving them vulnerable to interception and disclosure using a packet analyzer.[2] The encryption used by SSH is intended to provide confidentiality and data integrity over an insecure network, such as the Internet.

Definition
SSH uses public-key cryptography to authenticate the remote computer and, where necessary, to let the remote computer authenticate the user.[1] There are several ways to use SSH; one is to use automatically generated public-private key pairs simply to encrypt the network connection, and then use password authentication to log in.

Another use is to generate a public-private key pair manually to perform the authentication itself, allowing users or programs to log in without having to specify a password. In this scenario, anyone can generate a matching pair of different keys (public and private). The public key is placed on every computer that must allow access to the owner of the matching private key, and the owner keeps the private key secret. Although authentication is based on the private key, the key itself is never transferred over the network during authentication. SSH only verifies that the person offering the public key also holds the matching private key. In all versions of SSH it is important to verify unknown public keys, that is, to associate a public key with an identity, before accepting them as valid; accepting an attacker's public key without validation would authorize an unauthorized attacker as a valid user.
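As a sketch of what key-based login looks like in practice from a script, here is a minimal example using the third-party paramiko library; the host name, user name, and key path are made-up placeholders:

# Sketch of key-based SSH login using the third-party paramiko library
# (pip install paramiko). Host, user, and key path are placeholders.
import paramiko

client = paramiko.SSHClient()
# Check the server against known host keys instead of blindly accepting an
# unknown key; accepting an unverified key would let an attacker impersonate
# the server, as noted above.
client.load_system_host_keys()

client.connect(
    "server.example.com",
    username="alice",
    key_filename="/home/alice/.ssh/id_rsa",  # private key; the matching public
)                                            # key sits in the server's
                                             # authorized_keys file
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()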

History
In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology in Finland, designed the first version of the protocol (now called SSH-1), prompted by a password-sniffing attack on the university network. The goal of SSH was to replace the rlogin, TELNET, and rsh protocols, which provided neither strong authentication nor confidentiality. Ylönen released SSH as freeware in July 1995, and the tool quickly grew in popularity. Toward the end of 1995, the SSH user base had grown to 20,000 users in fifty countries.

In December 1995, Ylönen founded SSH Communications Security to market and develop SSH. The original version of the SSH software used various pieces of free software, such as GNU libgmp, but later versions released by SSH Communications Security increasingly became proprietary software.

In 1996 a revised version of the protocol, SSH-2, was designed; it is incompatible with SSH-1. SSH-2 features both security and usability improvements over SSH-1. Better security comes, for example, from Diffie-Hellman key exchange and strong integrity checking via message authentication codes. New features of SSH-2 include the ability to run any number of shell sessions over a single SSH connection.

In 1998 a vulnerability was described in SSH 1.5 that allowed unauthorized content to be injected into an encrypted SSH data stream, because the CRC-32 checksum used in this version of the protocol provided insufficient data-integrity protection. A fix (the SSH Compensation Attack Detector) was introduced into many implementations.

In 1999, developers who wanted a free software version to be available again went back to release 1.2.12 of the original ssh program, the last version released under an open source license. Björn Grönvall's OSSH was subsequently developed from this codebase. Shortly thereafter, the OpenBSD developers forked Grönvall's code and did much broader work on it, creating OpenSSH, which shipped with OpenBSD release 2.6. From this version a "portable" branch was formed to port OpenSSH to other operating systems.

By 2000, the number of SSH users was estimated to have grown to more than 2,000,000.

By 2005, OpenSSH was the single most popular SSH implementation, installed by default in a large number of operating systems. Meanwhile, OSSH had become obsolete.

In 2006, the SSH-2 protocol described above was proposed as an Internet Standard through its publication by the IETF "secsh" working group as RFCs (see references).

In 2008 a cryptographic weakness was discovered in SSH-2 that allows up to 4 bytes of plaintext to be recovered from a single SSH data stream under special conditions. It has since been fixed by changing the default encryption mode in OpenSSH 5.2.

Benefits of Using SSH
The benefit of using an SSH account is better security for your data when you access the Internet: with the SSH account acting as the intermediary for your connection, SSH encrypts all of the data it carries before passing it on to the other server.

Besides encrypting data, SSH can also do port forwarding, which gives us the following benefits:

1. Connecting to TCP applications (for example a web server, mail server, or FTP server) more securely.
2. Bypassing a local firewall or proxy.

It is the second benefit above that Internet users most often look for and exploit for Internet access. With an SSH account we can also manage a VPS to use as hosting or for other purposes.

Using an SSH account to tunnel your Internet connection does not guarantee a faster connection. However, with an SSH account the IP address you use is effectively static, and you can use it privately, provided you are the only user of that SSH account.

The SSH protocol has many functions. Besides the tunneling we use so often, SSH can also be used for SFTP, as a SOCKS4/5 proxy, or to manage a VPS or hosting of our own, particularly a Linux VPS running a distribution such as CentOS. To tunnel over SSH we can use an SSH client such as Bitvise Tunnelier or PuTTY on Windows.
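The same kind of local port forwarding can also be scripted. Below is a rough sketch using the third-party sshtunnel library; the server address, credentials, and ports are all made-up placeholders, and it is only meant to mirror what GUI clients like Bitvise or PuTTY do:

# Sketch of SSH local port forwarding ("tunneling") with the third-party
# sshtunnel library (pip install sshtunnel). Server, credentials, and ports
# are placeholders.
from sshtunnel import SSHTunnelForwarder
import urllib.request

with SSHTunnelForwarder(
    ("ssh.example.com", 22),               # the SSH server we tunnel through
    ssh_username="alice",
    ssh_password="secret",
    remote_bind_address=("10.0.0.5", 80),  # service reachable from that server
    local_bind_address=("127.0.0.1", 8080),
) as tunnel:
    # Traffic to localhost:8080 now travels encrypted through the SSH server
    # to 10.0.0.5:80, passing any local firewall rules that block port 80.
    with urllib.request.urlopen("http://127.0.0.1:8080/", timeout=10) as resp:
        print(resp.status)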

To get an SSH account, we can obtain one for free at cjb.net, or, if we have a VPS, the provider usually also supplies SSH access for managing it.

Dedicated Server Definition

Definition - What does Dedicated Server mean?

A dedicated server is a type of remote server that is entirely dedicated to an individual, organization or application. It is deployed, hosted and managed by a hosting, cloud or managed service provider (MSP).

A dedicated server is exclusive and not shared with any other customer, service or application.

In the Web hosting business, a dedicated server refers to the rental and exclusive use of a computer that includes a Web server, related software, and connection to the Internet, housed in the Web hosting company's premises. A dedicated server is usually needed for a Web site (or set of related company sites) that may develop a considerable amount of traffic - for example, a site that must handle up to 35 million hits a day. The server can usually be configured and operated remotely from the client company. Web hosting companies claim that the use of a dedicated server on their premises saves router, Internet connection, security system, and network administration costs.

In renting a dedicated server, the client company may be required to use a specified computer system or may be offered a choice of several systems. Some host providers allow a client company to purchase and install its own computer server at the host provider's location, a service known as colocation.

Typically, a dedicated server is rented that provides a stated amount of memory, hard disk space, and bandwidth (here meaning the number of gigabytes of data that can be delivered each month). Some hosting companies allow the renter of a dedicated server to do virtual hosting, in turn renting services on the server to third parties for their Web sites. Domain name system, e-mail, and File Transfer Protocol (FTP) capabilities are typically included and some companies provide an easy-to-use control interface.

Operating system support
Availability, price and employee familiarity often determine which operating systems are offered on dedicated servers. Variations of Linux and Unix (open source operating systems) are often included at no charge to the customer. Commercial operating systems include Microsoft Windows Server, provided through a special program called Microsoft SPLA. Red Hat Enterprise Linux is a commercial version of Linux offered to hosting providers on a monthly fee basis. The monthly fee provides OS updates through the Red Hat Network using an application called yum. Other operating systems are available from the open source community at no charge. These include CentOS, Fedora Core, Debian, and many other Linux distributions, as well as the BSD systems FreeBSD, NetBSD, and OpenBSD.

Support for any of these operating systems typically depends on the level of management offered with a particular dedicated server plan. Operating system support may include updates to the core system in order to acquire the latest security fixes, patches, and system-wide vulnerability resolutions. Updates to core operating systems include kernel upgrades, service packs, application updates, and security patches that keep the server secure and safe. Operating system updates and support relieve the dedicated server owner of the burden of server management.

Bandwidth and connectivity
Bandwidth refers to the data transfer rate or the amount of data that can be carried from one point to another in a given time period (usually a second) and is often represented in bits (of data) per second (bit/s). For example, visitors to your server, web site, or applications utilize bandwidth. Three ways of measuring and billing it are described below.

95th percentile method
Line speed, billed on the 95th percentile, refers to the speed in which data flows from the server or device, measured every 5 minutes for the month, and dropping the top 5% of measurements that are highest, and basing the usage for the month on the next-highest measurement. This is similar to a median measurement, which can be thought of as a 50th percentile measurement (with 50% of measurements above, and 50% of measurements below), whereas this sets the cutoff at 95th percentile, with 5% of measurements above the value, and 95% of measurements below the value. This is also known as Burstable billing. Line speed is measured in bits per second (or kilobits per second, megabits per second or gigabits per second).
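In code, the billing rule just described could be sketched roughly as follows; the five-minute samples here are invented values, not real traffic data:

# Rough sketch of 95th-percentile bandwidth billing: collect one line-speed
# sample every 5 minutes for the month, drop the highest 5% of samples, and
# bill on the highest remaining value. The sample list is invented.
def billable_95th_percentile(samples_mbps):
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95) - 1   # index of the 95th-percentile sample
    return ordered[max(cutoff, 0)]

# A real month has ~8640 five-minute samples; a short made-up list for the demo:
samples = [12, 15, 9, 80, 11, 14, 95, 10, 13, 12, 11, 120, 14, 13, 12, 10,
           11, 12, 13, 14]
print(billable_95th_percentile(samples), "Mbit/s billed")  # the 120 Mbit/s spike is dropped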

Unmetered method
The second bandwidth measurement is unmetered service, where the provider caps or controls the "top line" speed for a server. Top line speed in unmetered bandwidth is the total Mbit/s allocated to the server and configured at the switch level. For example, if you purchase 10 Mbit/s of unmetered bandwidth, the top line speed would be 10 Mbit/s: the provider controls the speed at which transfers take place, while the dedicated server owner is never charged for bandwidth overages. Unmetered bandwidth services usually incur an additional charge.

Total transfer method
Some providers will calculate the Total Transfer, which is the measurement of actual data leaving and arriving, measured in bytes. Although it is typically the sum of all traffic into and out of the server, some providers measure only outbound traffic (traffic from the server to the internet).

Bandwidth pooling
This is a key mechanism for hosting buyers to determine which provider offers the right bandwidth pricing. Most dedicated hosting providers bundle bandwidth pricing along with the monthly charge for the dedicated server. Let us illustrate this with an example. An average $100 server from any of the common dedicated bandwidth providers would carry 2 TB of bandwidth. Suppose you purchased 10 servers; you would then have the ability to consume 2 TB of bandwidth per server. However, assume that given your application architecture only 2 of these 10 servers are really web facing, while the rest are used for storage, search, database or other internal functions. A provider that allows bandwidth pooling would let you consume 20 TB of bandwidth overall, as incoming or outbound traffic or both depending on their policy. A provider that does not offer bandwidth pooling would let you use only 4 TB of bandwidth, and the remaining 16 TB would be practically unusable. This fact is well known to hosting providers and allows them to cut costs by offering an amount of bandwidth that frequently will not be used. This is known as overselling, and it allows high-bandwidth customers to use more than what a host might otherwise offer, because the host knows that this will be balanced out by customers who use less than the maximum allowed.

One of the reasons for choosing to outsource dedicated servers is the availability of high powered networks from multiple providers. As dedicated server providers utilize massive amounts of bandwidth, they are able to secure lower volume based pricing to include a multi-provider blend of bandwidth. To achieve the same type of network without a multi-provider blend of bandwidth, a large investment in core routers, long term contracts, and expensive monthly bills would need to be in place. The expenses needed to develop a network without a multi-provider blend of bandwidth does not make sense economically for hosting providers.

Many dedicated server providers include a service level agreement based on network up-time. Some dedicated server hosting providers offer a 100% up-time guarantee on their network. By securing multiple vendors for connectivity and using redundant hardware, providers are able to guarantee higher up-times; usually between 99-100% up-time if they are a higher quality provider. One aspect of higher quality providers is they are most likely to be multi-homed across multiple quality up-link providers, which in turn, provides significant redundancy in the event one goes down in addition to potentially improved routes to destinations.

Bandwidth consumption over the last several years has shifted from a per-megabit usage model to a per-gigabyte usage model. Bandwidth was traditionally measured in line speed access that included the ability to purchase needed megabits at a given monthly cost. As the shared hosting model developed, the trend toward billing by gigabyte, or total bytes transferred, replaced the megabit line speed model, and dedicated server providers started offering bandwidth per gigabyte as well.

Prominent players in the dedicated server market offer large amounts of bandwidth ranging from 500 gigabytes to 3000 gigabytes using the "overselling" model. It is not uncommon for major players to provide dedicated servers with 1 terabyte (TB) of bandwidth or higher. Usage models based on byte-level measurement usually include a given amount of bandwidth with each server and a price per gigabyte after a certain threshold has been reached. Expect to pay additional fees for bandwidth overage usage. For example, if a dedicated server has been given 3000 gigabytes of bandwidth per month and the customer uses 5000 gigabytes of bandwidth within the billing period, the additional 2000 gigabytes of bandwidth will be invoiced as bandwidth overage. Each provider has a different model for billing. No industry standards have been set yet.
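The overage arithmetic in that example can be expressed as a small helper; the base price and per-gigabyte rate below are made-up placeholders, not any provider's real rates:

# Sketch of the overage calculation from the example above; prices are invented.
def bandwidth_invoice(included_gb, used_gb, base_price, price_per_gb_over):
    overage_gb = max(used_gb - included_gb, 0)
    return base_price + overage_gb * price_per_gb_over

# 3000 GB included, 5000 GB used -> 2000 GB billed as overage.
print(bandwidth_invoice(included_gb=3000, used_gb=5000,
                        base_price=100.0, price_per_gb_over=0.10))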

Management
Managed dedicated server
Dedicated hosting services primarily differ from managed hosting services in that managed hosting services usually offer more support and other services. As such, managed hosting is targeted towards clients with less technical knowledge, whereas dedicated hosting services, or unmanaged hosting services, are suitable for web development and system administrator professionals.[2]

To date, no industry standards have been set to clearly define the management role of dedicated server providers. What this means is that each provider will use industry standard terms, but each provider will define them differently. For some dedicated server providers, fully managed is defined as having a web based control panel while other providers define it as having dedicated system engineers readily available to handle all server and network related functions of the dedicated server provider.

Server management can include some or all of the following:

  • Operating system updates
  • Application updates
  • Server monitoring
  • SNMP hardware monitoring
  • Application monitoring
  • Application management
  • Technical support
  • Firewall services
  • Anti-spam software
  • Antivirus updates
  • Security audits
  • DDoS protection and mitigation
  • Intrusion detection
  • Backups and restoration
  • Disaster recovery
  • DNS hosting service
  • Load balancing
  • Database administration
  • Performance tuning
  • Software installation and configuration
  • User management
  • Programming consultation

Dedicated hosting server providers define their level of management based on the services they provide. In practice, one provider's "fully managed" plan may be equivalent to another provider's "self-managed" plan.

Administrative maintenance of the operating system, often including upgrades, security patches, and sometimes even daemon updates, is typically included. Differing levels of management may include adding users, domains, daemon configuration, or even custom programming.

Dedicated server hosting providers may provide the following types of server managed support:

Fully managed – Includes monitoring, software updates, reboots, security patches and operating system upgrades. Customers are completely hands-off.
Managed – Includes medium level of management, monitoring, updates, and a limited amount of support. Customers may perform specific tasks.
Self-managed – Includes regular monitoring and some maintenance. Customers provide most operations and tasks on dedicated server.
Unmanaged – Little to no involvement from service provider. Customers provide all maintenance, upgrades, patches, and security.

Security
Dedicated hosting server providers utilize extreme security measures to ensure the safety of data stored on their network of servers. Providers will often deploy various software programs for scanning systems and networks for obtrusive invaders, spammers, hackers, and other harmful problems such as Trojans, worms, and crashers (sending multiple connections). Linux and Windows use different software for security protection.

Software
Providers often bill for dedicated servers on a fixed monthly price to include specific software packages. Over the years, software vendors realized the significant market opportunity to bundle their software with dedicated servers. They have since started introducing pricing models that allow dedicated hosting providers the ability to purchase and resell software based on reduced monthly fees.

Microsoft offers software licenses through a program called the Service Provider License Agreement. The SPLA model provides use of Microsoft products through a monthly user or processor based fee. SPLA software includes the Windows operating system, Microsoft SQL Server, Microsoft Exchange Server, Microsoft SharePoint, and many other server based products.

Dedicated server providers usually offer the ability to select the software you want installed on a dedicated server. Depending on the overall usage of the server, this will include your choice of operating system, database, and specific applications. Servers can be customized and tailored specific to the customer’s needs and requirements.

Other software applications available are specialized web hosting specific programs called control panels. Control panel software is an all inclusive set of software applications, server applications, and automation tools that can be installed on a dedicated server. Control panels include integration into web servers, database applications, programming languages, application deployment, server administration tasks, and include the ability to automate tasks via a web based front end.

Most dedicated servers are packaged with a control panel. Control panels are often confused with management tools, but these control panels are actually web based automation tools created to help automate the process of web site creation and server management. A control panel should not be confused with a full server management solution offered by a dedicated hosting provider.

What Is SEO? and History

What Is SEO?
SEO stands for “search engine optimization.” It is the process of getting traffic from the “free,” “organic,” “editorial” or “natural” search results on search engines.
All major search engines such as Google, Bing and Yahoo have primary search results, where web pages and other content such as videos or local listings are shown and ranked based on what the search engine considers most relevant to users. Payment isn’t involved, as it is with paid search ads. 

Stands for "Search Engine Optimization." Just about every Webmaster wants his or her site to appear in the top listings of all the major search engines. Say, for example, that Bob runs an online soccer store. He wants his site to show up in the top few listings when someone searches for "soccer shoes." Then he gets more leads from search engines, which means more traffic, more sales, and more revenue. The problem is that there are thousands of other soccer sites, whose Webmasters are hoping for the same thing. That's where search engine optimization, or SEO, comes in.

SEO involves a number of adjustments to the HTML of individual Web pages to achieve a high search engine ranking. First, the title of the page must include relevant information about the page. In the previous example, Bob's home page might have the title, "Bob's Soccer Store -- Soccer Shoes and Equipment." The title is the most important part of SEO, since it tells the search engine exactly what the page is about. Within Bob's home page, it would be helpful to repeat the words "soccer" and "soccer shoes" a few times, since search engines also scan the text of the pages they index.

Finally, there are META tags. These HTML tags can really distinguish your site from the rest of the pile. The META tags that most search engines read are the description and keywords tags. Within the description tag, you should type a brief description of the Web page; it should be similar to, but more detailed than, the title. Within the keywords tag, you should list 5-20 words that relate to the content of the page. Using META tags can significantly boost your search engine ranking.
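A quick way to see what a page's title and META tags actually say is to pull them out with a small script. The sketch below uses only Python's standard library; the URL is a placeholder:

# Sketch: extract the <title> and the description/keywords META tags
# discussed above, using only the standard library. The URL is a placeholder.
import urllib.request
from html.parser import HTMLParser

class SEOTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") in ("description", "keywords"):
            self.meta[attrs["name"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = urllib.request.urlopen("http://www.example.com/", timeout=10).read()
parser = SEOTagParser()
parser.feed(html.decode("utf-8", errors="replace"))
print("title:", parser.title)
print("meta :", parser.meta)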

So what happens when a bunch of sites all have similar titles, content, and META tags? Well, most search engines choose to list the most popular sites first. But then how do you get into the most popular sites? The best way is to submit your site to Web directories (not just search engines) and get other sites to link to yours. It can be a long climb to the top, but your perseverance will pay off. For more tips on SEO, visit the Submit Corner Web site.

History
Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was to submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed. The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997. Sullivan credits Bruce Clay as being one of the first people to popularize the term. On May 2, 2007, Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona that SEO is a "process" involving manipulation of keywords, and not a "marketing service."

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches. Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.

By relying so much on factors such as keyword density which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, poor quality or irrelevant search results could lead users to find other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate. Graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[8] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.
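In its standard textbook (normalized) form, which is a general description rather than Google's exact production scoring, the PageRank of a page p with damping factor d, N pages in total, and B(p) the set of pages linking to p can be written as

PR(p) = \frac{1 - d}{N} + d \sum_{q \in B(p)} \frac{PR(q)}{L(q)}

where L(q) is the number of outbound links on page q. The damping factor d models the random surfer described above: with probability d the surfer follows a link from the current page, and with probability 1 - d they jump to a random page.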

Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. In June 2007, The New York Times' Saul Hansell stated that Google ranks sites using more than 200 different signals. The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization and have shared their personal opinions. Patents related to search engines can provide information to better understand search engines.

In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users. In 2008, Bruce Clay said that "ranking is dead" because of personalized search. He opined that it would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.

In 2007, Google announced a campaign against paid links that transfer PageRank. On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Google Bot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting. As a result of this change, the use of nofollow leads to evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash, and JavaScript.

In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.

On June 8, 2010 a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up quicker on Google than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."

Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.

In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically websites have copied content from one another and benefited in search engine rankings by engaging in this practice, however Google implemented a new system which punishes sites whose content is not unique.

In April 2012, Google launched the Google Penguin update the goal of which was to penalize websites that used manipulative techniques to improve their rankings on the search engine.

In September 2013, Google released the Google Hummingbird update, an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.

WordPress vs. Blogger – Which one is Better?

Important: Please note that this comparison is between self-hosted WordPress.org and Blogger, not WordPress.com vs Blogger. Please see our guide on the difference between self-hosted WordPress.org vs Free WordPress.com blog.

1. Ownership
Blogger is a blogging service provided by the tech giant Google. It is free, reliable most of the time, and good enough to publish your content on the web. However, it is not owned by you. Google runs the service and has the right to shut it down, or to shut down your access to it, at any time.
With WordPress, you use a WordPress hosting provider to host your own site. You are free to decide how long you want to run it and when you want to shut it down. You own all your data, and you control what information you share with any third party.

2. Control
Blogger is a fine-tuned service with very limited tools that allow you to perform only specific tasks on your website. The things you can do on your Blogspot blog are limited, and there is no way to extend them to meet a new need.
WordPress is open-source software, so you can easily extend it to add new features. There are thousands of WordPress plugins that let you modify and extend the default feature set, such as adding a store to your website, creating a portfolio, and so on.
When comparing WordPress vs Blogger for business websites, WordPress is hands down the better long-term solution for any serious business owner.

3. Appearance
Blogger by default provides only a limited set of templates. You can modify the colors and layout of these templates using the built-in tools, but you cannot create your own layouts or make deeper modifications. There are some unofficial Blogger templates available, but they are usually of very low quality.
There are thousands of free and premium WordPress themes which allow you to create professional looking websites. There is a WordPress theme for just about every kind of website. No matter what your site is about, you will find plenty of high quality themes which are easy to modify and customize.

4. Portability
Moving your site from Blogger to a different platform is a complicated task. There is a significant risk that you will lose your SEO (search engine rankings), subscribers, and followers during the move. Even though Blogger lets you export your content, your data will stay on Google's servers for a very long time.
With WordPress, you can move your site anywhere you want. You can move your WordPress site to a new host, change your domain name, or even move your site to another content management system.
If you compare WordPress vs Blogger on SEO, WordPress also offers far more SEO advantages.

5. Security
With Blogger, you have the added advantage of Google's robust, secure platform. You don't need to worry about managing your server's resources, securing your blog, or creating backups.
WordPress is quite secure, but since it is a self-hosted solution, you are responsible for security and backups. There are plenty of WordPress plugins that make this easier.

6. Support
There is limited support available for Blogger. It has very basic documentation and a user forum, so your choices for support are very limited.
WordPress has a very active community support system. There is online documentation, community forums, and IRC chatrooms where you can get help from experienced WordPress users and developers. Apart from community support, there are many companies offering premium support for WordPress. Check out our guide on how to properly ask for WordPress support and get it.

7. Future
Blogger has not seen a major update in a very long time. We have seen Google kill popular services such as Google Reader and AdSense for feeds, and FeedBurner's possible demise looms as well. The future of Blogger depends on Google, which has the right to shut it down whenever it wants.
WordPress is open-source software, which means its future does not depend on one company or individual (check out the history of WordPress). It is developed by a community of developers and users, and as the world's most popular content management system, it is relied on by thousands of businesses around the globe. The future of WordPress is bright and reassuring.
We hope this WordPress vs Blogger comparison helped you understand the pros and cons of each so you can make the right decision for your business. To learn more about WordPress, we recommend reading our guides Why is WordPress Free? and 9 Most Common Misconceptions About WordPress.

Server Part 2

A server is a computer that provides data to other computers. It may serve data to systems on a local area network (LAN) or a wide area network (WAN) over the Internet.

Many types of servers exist, including web servers, mail servers, and file servers. Each type runs software specific to the purpose of the server. For example, a Web server may run Apache HTTP Server or Microsoft IIS, which both provide access to websites over the Internet. A mail server may run a program like Exim or iMail, which provides SMTP services for sending and receiving email. A file server might use Samba or the operating system's built-in file sharing services to share files over a network.
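To illustrate how little separates an ordinary computer from a simple server, here is a minimal sketch that uses Python's standard-library http.server module (not any of the products named above) to serve files from its working directory over HTTP; the port number is an arbitrary choice made for this example.

# Minimal web/file server sketch using only Python's standard library.
# Production deployments would normally use dedicated software such as
# Apache HTTP Server or Microsoft IIS instead.
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8000  # arbitrary local port chosen for this example

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
    print(f"Serving files from the current directory on port {PORT} ...")
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        server.server_close()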

While server software is specific to the type of server, the hardware is not as important. In fact, a regular desktop computer can be turned into a server by adding the appropriate software. For example, a computer connected to a home network can be designated as a file server, print server, or both.

While any computer can be configured as a server, most large businesses use rack-mountable hardware designed specifically for server functionality. These systems, often 1U in size, take up minimal space and often have useful features such as LED status lights and hot-swappable hard drive bays. Multiple rack-mountable servers can be placed in a single rack and often share the same monitor and input devices. Most servers are accessed remotely using remote access software, so input devices are often not even necessary.

While servers can run on different types of computers, it is important that the hardware is sufficient to support the demands of the server. For instance, a web server that runs lots of web scripts in real-time should have a fast processor and enough RAM to handle the "load" without slowing down. A file server should have one or more fast hard drives or SSDs that can read and write data quickly. Regardless of the type of server, a fast network connection is critical, since all data flows through that connection.

1) In information technology, a server is a computer program that provides services to other computer programs (and their users) in the same or other computers.

2) The computer that a server program runs in is also frequently referred to as a server (though it may be used for other purposes as well).

3) In the client/server programming model, a server is a program that awaits and fulfills requests from client programs in the same or other computers. A given application in a computer may function as a client with requests for services from other programs and also as a server of requests from other programs.

Specific to the Web, a Web server is the computer program (housed in a computer) that serves requested HTML pages or files. A Web client is the requesting program associated with the user. The Web browser in your computer is a client that requests HTML files from Web servers.
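Here is a minimal sketch of the client side of that exchange, written with Python's standard-library urllib: the program below requests a page from a web server and prints the response status and the start of the returned HTML. The URL is a placeholder chosen for the example; any reachable web server would do.

from urllib.request import urlopen

# The program below plays the "client" role in the client/server model,
# requesting an HTML page from a web server.
with urlopen("http://example.com/") as response:
    print(response.status)  # e.g. 200 if the request succeeded
    html = response.read().decode("utf-8", errors="replace")

print(html[:200])  # first few characters of the requested HTML page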


Top-Level Domain (TLD) Definition

"TLD" and "TLDN" redirect here. For Temporary Location Directory Number, see Mobile Station Roaming Number. For other uses, see TLD (disambiguation). A top-level domain (TLD) is one of the domains at the highest level in the hierarchical Domain Name System of the Internet.[1] The top-level domain names are installed in the root zone of the name space. For all domains in lower levels, it is the last part of the domain name, that is, the last label of a fully qualified domain name. For example, in the domain name www.example.com, the top-level domain is com. Responsibility for management of most top-level domains is delegated to specific organizations by the Internet Corporation for Assigned Names and Numbers (ICANN), which operates the Internet Assigned Numbers Authority (IANA), and is in charge of maintaining the DNS root zone.

A top-level domain (TLD) is the last segment of the domain name. The TLD is the letters immediately following the final dot in an Internet address.
A TLD identifies something about the website associated with it, such as its purpose, the organization that owns it or the geographical area where it originates. Each TLD has a separate registry managed by a designated organization under the direction of the Internet Corporation for Assigned Names and Numbers (ICANN).

In our Internet address, http://whatis.techtarget.com: com is the top-level domain name; techtarget.com is the second-level domain name; and whatis is a subdomain name. Together, these constitute a fully qualified domain name (FQDN); the addition of http:// makes the FQDN a complete URL.
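The breakdown above can be reproduced programmatically. The sketch below is deliberately naive: it treats the last label as the TLD and the last two labels as the second-level domain, which works for this address but not for multi-label suffixes such as .co.uk.

from urllib.parse import urlparse

url = "http://whatis.techtarget.com"
host = urlparse(url).hostname          # "whatis.techtarget.com" (the FQDN)

labels = host.split(".")
tld = labels[-1]                       # "com"            - top-level domain
second_level = ".".join(labels[-2:])   # "techtarget.com" - second-level domain
subdomain = ".".join(labels[:-2])      # "whatis"         - subdomain

print(tld, second_level, subdomain)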

ICANN identifies the following categories of TLDs:

Country-code top-level domains (ccTLD) -- Each ccTLD identifies a particular country and is two letters long. The ccTLD for the United States, for example, is .us.
Infrastructure top-level domain -- There is only one TLD in this group, ARPA (Address and Routing Parameter Area). The Internet Assigned Numbers Authority (IANA) manages this TLD for the IETF.
Sponsored top-level domains (sTLD) -- These are overseen by private organizations.
Generic top-level domains (gTLD) -- These are the most common and familiar TLDs. Examples include "com" for "commercial" and "edu" for "educational." Most gTLDs are open for registration by anyone, but there is also a subgroup that is more strictly controlled.
In April 2009, ICANN proposed an expansion of the TLD system to allow anyone to register and reserve any unused letter sequence as a TLD for their exclusive use. A company that sold software, for example, might like to use .soft as a TLD. According to ICANN chief executive Paul Levins, such an expansion could lead to thousands of new TLDs in the next few years.

History
Originally, the top-level domain space was organized into three main groups: Countries, Categories, and Multiorganizations. An additional temporary group consisted only of the initial DNS domain, arpa, and was intended for transitional purposes toward the stabilization of the domain name system.

Types of TLDs
IANA today distinguishes the following groups of top-level domains:

descriptive top-level domains, such as .guru
country-code top-level domains (ccTLD): two-letter domains established for countries or territories. With some historical exceptions, the code for any territory is the same as its two-letter ISO 3166 code.
internationalized country code top-level domains (IDN ccTLD): ccTLDs in non-Latin character sets (e.g., Arabic or Chinese).
test IDN TLDs: domains installed temporarily for testing purposes during the IDN development process.
generic top-level domains (gTLD): top-level domains with three or more characters.
unsponsored top-level domains: domains that operate directly under policies established by ICANN processes for the global Internet community.
sponsored top-level domains (sTLD): domains proposed and sponsored by private agencies or organizations that establish and enforce rules restricting eligibility to use the TLD. Use is based on community theme concepts.
infrastructure top-level domain: this group consists of one domain, the Address and Routing Parameter Area (ARPA). It is managed by IANA on behalf of the Internet Engineering Task Force for various purposes specified in the Request for Comments publications.
Countries are designated in the Domain Name System by their two-letter ISO country code; there are exceptions, however (e.g., .uk). This group of domains is therefore commonly known as country-code top-level domains (ccTLD). Since 2009, countries with non–Latin-based scripts may apply for internationalized country code top-level domain names, which are displayed in end-user applications in their language-native script or alphabet, but use a Punycode-translated ASCII domain name in the Domain Name System.
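Python's standard library can show that Punycode translation directly: the built-in "idna" codec converts a Unicode label into the ASCII "xn--" form carried in the DNS, and back again. The sample label below is purely illustrative.

# Sketch: how an internationalized domain label is carried in DNS.
unicode_label = "münchen"
ascii_label = unicode_label.encode("idna").decode("ascii")
print(ascii_label)                                   # xn--mnchen-3ya

# And back again, as a browser would display it:
print(ascii_label.encode("ascii").decode("idna"))    # münchen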

Generic top-level domains (formerly Categories) initially consisted of gov, edu, com, mil, org, and net. More generic TLDs have been added, such as info.

The authoritative list of currently existing TLDs in the root zone is published at the IANA website at https://www.iana.org/domains/root/db/.

Internationalized country code TLDs
An internationalized country code top-level domain (IDN ccTLD) is a top-level domain with a specially encoded domain name that is displayed in an end user application, such as a web browser, in its language-native script or alphabet, such as the Arabic alphabet, or a non-alphabetic writing system, such as Chinese characters. IDN ccTLDs are an application of the internationalized domain name (IDN) system to top-level Internet domains assigned to countries, or independent geographic regions.

ICANN started to accept applications for IDN ccTLDs in November 2009,[6] and installed the first set into the Domain Names System in May 2010. The first set was a group of Arabic names for the countries of Egypt, Saudi Arabia, and the United Arab Emirates. By May 2010, 21 countries had submitted applications to ICANN, representing 11 scripts.

Self Hosting

Self-hosting is the act of having your website completely under your control. This can mean managing all aspects of it yourself, from setting up the web server and installing software, down to simply managing your weblog software such as WordPress. It is usually accomplished either by renting your own server or virtual machine or, more commonly, by using shared hosting. The latter choice keeps costs low at the expense of some control, though generally not in the areas you are interested in.

The opposite of self-hosting is free hosting, where you use a free service to host your site, usually Blogger, WordPress.com, LiveJournal, or another popular system. This means that you have very little control over your site other than its content, and you have to follow the service's specific terms of service (TOS), which are frequently quite restrictive.

Now, if you are a blogger, you will sooner or later form a strong connection to the place where you make your little corner of the web. Your blog becomes a sort of online home, with much more value than any shallow social network profile. It is the place where your character or professionalism is shown to the world at large, and with time you may find that you have invested a lot of effort and time into it.

It is for this reason that I believe that, for anyone who treats blogging even as a hobby, self-hosting their own site is the better choice. I understand, however, that there are a lot of misconceptions about what self-hosting entails, and those misconceptions are a big part of why so many people avoid it.

To this end, I've written the following articles in the hope of countering some of these misconceptions.

The term self-hosting was coined to refer to the use of a computer program as part of the toolchain or operating system that produces new versions of that same program—for example, a compiler that can compile its own source code. Self-hosting software is commonplace on personal computers and larger systems. Other programs that are typically self-hosting include kernels, assemblers, command-line interpreters and revision control software.

If a system is so new that no software has been written for it, then software is developed on another self-hosting system and placed on a storage device that the new system can read. Development continues this way until the new system can reliably host its own development. Writing new software development tools "from the metal" (that is, without using another host system) is rare and in many cases impractical.

For example, Ken Thompson started development on Unix in 1968 by writing and compiling programs on the GE-635 and carrying them over to the PDP-7 for testing. After the initial Unix kernel, a command interpreter, an editor, an assembler, and a few utilities were completed, the Unix operating system was self-hosting -- programs could be written and tested on the PDP-7 itself.

Similarly, development of the Linux kernel was initially hosted on a Minix system. Once sufficient packages, such as GCC and GNU Bash, had been ported over, developers could build new versions of the Linux kernel on machines running older versions of it (for example, building kernel 3.21 on a machine running kernel 3.18). The same process applies when building a new Linux distribution from scratch.

History

The first self-hosting compiler (excluding assemblers) was written for Lisp by Hart and Levin at MIT in 1962. They wrote a Lisp compiler in Lisp, testing it inside an existing Lisp interpreter. Once they had improved the compiler to the point where it could compile its own source code, it was self-hosting.

The compiler as it exists on the standard compiler tape is a machine language program that was obtained by having the S-expression definition of the compiler work on itself through the interpreter.

—AI Memo 39
This technique is only possible when an interpreter already exists for the very same language that is to be compiled. It borrows directly from the notion of running a program on itself as input, which is also used in various proofs in theoretical computer science, such as the proof that the halting problem is undecidable.
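That idea of running a program on itself can be sketched in a few lines of code. In the sketch below, halts is hypothetical: the argument assumes such a decider exists and derives a contradiction by feeding a specially constructed program to itself, which is why no real implementation is possible.

# Sketch of the diagonalization behind the halting problem's undecidability.
# `halts` is hypothetical: assume, for contradiction, that it can decide
# whether program(arg) eventually halts.
def halts(program, arg) -> bool:
    ...  # assumed to exist; no real implementation is possible

def paradox(program):
    # Run the check on `program` applied to itself, then do the opposite
    # of whatever `halts` predicts.
    if halts(program, program):
        while True:   # loop forever if it was predicted to halt
            pass
    return            # halt if it was predicted to loop

# Asking about paradox(paradox) yields a contradiction:
# if halts(paradox, paradox) is True, paradox(paradox) loops forever;
# if it is False, paradox(paradox) halts. Hence no total `halts` can exist.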