Blog

Wednesday, 20 July 2016 05:15

The Periodic Table Of SEO Success Factors

  Search engine optimization — SEO — may seem like alchemy to the uninitiated. But there is a science to it. Search engines reward pages with the right combination of ranking factors or “signals.” SEO is about ensuring your content generates the right type of signals.

The chart above summarizes the major factors to focus on for search engine ranking success.

The olden days are a little older than you might think…

From the simplest to the most sophisticated, all computer programs rely on very simple instructions to perform basic functions: comparing two values, adding two numbers, moving items from one place to another. In modern systems, such instructions are generated by a compiler from a program in a high-level language, but early machines were so limited in memory and processing power that every instruction had to be spelled out completely, and mathematicians took up pencil and paper to manually work out formulas for configuring the machines – even before there were machines to configure.

“If you really want to look at the olden days, you want to start with Charles Babbage,” says Armando Solar-Lezama, assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Babbage designed an analytical engine – a mechanical contraption outfitted with gears and levers – that could be programmed to perform complicated computations. His collaborator, Ada Lovelace (daughter of poet Lord Byron), recognized the potential of the machine, too, and in 1842 wrote what’s considered to be the first computer program. Her lengthy algorithm was created specifically for computing Bernoulli numbers on Babbage’s machine – had it ever actually been built.

By the early 20th century, though, working computers existed, consisting of plug boards and cables that connected modules of the machine to one another. “They had giant switchboards for entering tables of values,” says Solar-Lezama. “Each row had a switch with 10 positions, one for each digit. The operator flipped the switches and reconfigured the plugs in order to set the values in the table.”

Before long, programmers realized it was possible to wire the machine in such a way that each row of switches would be interpreted as an instruction in a program. The machine could be reprogrammed by flipping switches rather than having to rewire it every time – not that writing such a program was easy. Even in later machines that used punched tapes or cards in place of switchboards, instructions had to be spelled out in detail. “If you wanted a program to multiply 5 + 7 by 3 + 2,” says Solar-Lezama, “you had to write a long sequence of instructions to compute 5+7 and put that result in one place. Then you’d write another instruction to compute 3+2, put that result in another place, and then write the instruction to compute the product of those two results.”

That painstaking process became a thing of the past in the late 1950s with Fortran, the first automated programming language. “Fortran allowed you to use actual formulas that anyone could understand,” says Solar-Lezama. Instead of a long series of instructions, programmers could simply use recognizable equations and linguistic names for memory addresses. “Instead of telling the computer to take the value in memory address 02739, you could tell it to use the value X,” he explains.
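
To make the contrast concrete, here is a small illustrative sketch, written in modern Python purely for readability rather than in Fortran or early machine code: the first half spells out every step and storage slot the way early programmers had to, while the last lines express the whole formula at once.

# Step-by-step style: every operation and storage location spelled out explicitly.
memory = {}
memory["slot_a"] = 5 + 7                                  # compute 5 + 7, put the result in one place
memory["slot_b"] = 3 + 2                                  # compute 3 + 2, put the result in another place
memory["slot_c"] = memory["slot_a"] * memory["slot_b"]    # multiply the two stored results

# Formula style, which languages like Fortran made possible: one recognizable
# expression with a named variable instead of a numeric memory address.
x = (5 + 7) * (3 + 2)
assert memory["slot_c"] == x == 60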

Today’s programming software can take programs written at a very high level and compile them into sequences of billions of instructions that a computer can understand. But programmers are still faced with the task of specifying their computation at the correct level of detail, precision, and correctness. “Essentially, programming has always been about figuring out the right strategy for a machine to perform the computation that you want,” says Solar-Lezama.

source:- http://engineering.mit.edu/ask/how-did-people-olden-days-create-software-without-any-programming-software

Below is a visual history of "search" and search engines; hopefully it's both a trip down memory lane and a useful resource for anyone looking to learn a bit more about the history of Internet search engines. 

WordStream's search engine history timeline is shown below.

 The History of Search Engines

Modern search engines are pretty incredible – complex algorithms enable search engines to take your search query and return results that are usually quite accurate, presenting you with valuable nuggets of information amid a vast mine of data.

Search engines have come a long way since their early prototypes, as our Internet Search Engines History infographic illustrates. From improvements in web crawlers and in categorizing and indexing the web, to the introduction of new protocols such as robots.txt so that webmasters can control which web pages get crawled, the development of search engines has been the culmination of multiple search technologies that emerged from different search engines. AltaVista was the first search engine to process natural language queries; Lycos started strong with a system of categorizing relevance signals, matching keywords with prefixes and word proximity; and Ask Jeeves introduced the use of human editors to match actual user search queries.
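
The robots.txt protocol mentioned above is just a plain-text file served from a site's root that well-behaved crawlers check before fetching pages. A minimal hypothetical example (the site and paths are made up) looks like this:

# Hypothetical robots.txt served at https://www.example.com/robots.txt
User-agent: *            # the rules below apply to all crawlers
Disallow: /admin/        # ask crawlers not to fetch anything under /admin/
Allow: /

Sitemap: https://www.example.com/sitemap.xml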

How Do Search Engines Work?

First of all, let's ask what is a search engine? A search engine is a program that searches the web for sites based on your keyword search terms. The search engine takes your keyword and returns search engine results pages (SERP), with a list of sites it deems relevant or connected to your searched keyword.

The goal for many sites is to appear in the first SERP for the most popular keywords related to their business. A site's keyword ranking is very important because the higher a site ranks in the SERP, the more people will see it.

SEO, or search engine optimization, is the method used to increase the likelihood of obtaining a first-page ranking through techniques such as link building, SEO title tags, content optimization, meta descriptions, and keyword research.
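
For reference, the title tag and meta description mentioned above are ordinary elements in a page's HTML head; a minimal hypothetical snippet (the wording here is invented) looks like this:

<head>
  <!-- The title tag is typically shown as the clickable headline in a SERP. -->
  <title>The Periodic Table of SEO Success Factors | Example Blog</title>
  <!-- The meta description often supplies the snippet shown under that headline. -->
  <meta name="description" content="A plain-English overview of the major on-page and off-page ranking factors.">
</head>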

Google and other major search engines like Bing and Yahoo use vast numbers of computers in order to search through the huge quantities of data across the web.

Web search engines catalog the world wide web by using a spider, or web crawler. These web-crawling robots were created for indexing content; they scan and assess the content on site pages and information archives across the web.
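
As a rough illustration of that crawl-and-index loop, here is a minimal Python sketch. It assumes the third-party requests and beautifulsoup4 packages, and the seed URL is hypothetical; real crawlers also respect robots.txt, deduplicate URLs far more carefully, and store much richer data.

from collections import deque
from urllib.parse import urljoin

import requests                      # third-party: pip install requests
from bs4 import BeautifulSoup        # third-party: pip install beautifulsoup4

def crawl(seed_url, max_pages=10):
    """Fetch pages breadth-first and build a tiny inverted index mapping word -> set of URLs."""
    index, seen, queue = {}, set(), deque([seed_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue                                          # skip pages that fail to load
        soup = BeautifulSoup(html, "html.parser")
        for word in soup.get_text().lower().split():
            index.setdefault(word, set()).add(url)            # record which pages contain the word
        for link in soup.find_all("a", href=True):
            queue.append(urljoin(url, link["href"]))          # follow outgoing links
    return index

# index = crawl("https://www.example.com/")    # hypothetical seed URL
# index.get("seo", set())                      # pages that mention the word "seo"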

Algorithms and Determining the Best Search Engines

Different internet search engines use different algorithms for determining which web pages are the most relevant for a particular search engine keyword, and which web pages should appear at the top of the search engine results page.

Relevancy is the key for online search engines – users naturally prefer a search engine that will give them the best and most relevant results.

Search engines are often quite guarded with their search algorithms, since their unique algorithm is trying to generate the most relevant results. The best search engines, and often the most popular search engines as a result, are the ones that are the most relevant.
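
In the simplest possible terms, a relevance algorithm scores each page against the query and sorts by that score. The Python sketch below uses nothing more than raw keyword frequency, which no modern engine would rely on alone, but it illustrates the ranking step; the pages dictionary is hypothetical data.

def rank(pages, query):
    """pages: dict mapping URL -> page text. Returns URLs ordered by a naive term-frequency score."""
    terms = query.lower().split()
    scores = {url: sum(text.lower().count(t) for t in terms) for url, text in pages.items()}
    return sorted(scores, key=scores.get, reverse=True)

pages = {
    "https://example.com/a": "seo tips and seo tools",
    "https://example.com/b": "history of search engines",
}
print(rank(pages, "seo"))   # the page that mentions "seo" twice is listed first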

Search Engine History

Search engine history started in 1990 with Archie, a searchable index of the directory listings on public FTP sites. Search engines continued to be primitive directory listings until they developed the ability to crawl and index websites, eventually creating algorithms to optimize relevancy.

Yahoo started off as just a list of favorite websites, eventually growing large enough to become a searchable index directory. They actually had their search services outsourced until 2002, when they started to really work on their search engine.

History of Google Search Engine

Google's unique and improving algorithm has made it one of the most popular search engines of all time. Other search engines continue to have a difficult time matching the relevancy algorithm Google has created by examining a number of factors such as social media, inbound links, fresh content, etc.

As evidenced by the above infographic, Google appeared on the search engine scene in 1996. Google was unique because it ranked pages according to citation notation, in which a mention of one site on a different website became a vote in that site's favor. This was something that other search engines had not done before.

Google also began judging sites by authority. A website's authority, or trustworthiness, was determined by how many other websites were linking to it, and how reliable those outside linking sites were.
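
The "votes" and "authority" ideas above are usually explained with the textbook PageRank recurrence, in which a page's score is divided among the pages it links to and the calculation is repeated until it settles. The Python sketch below is that textbook simplification, not Google's actual production algorithm, and the three-site link graph is invented for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to. Returns page -> authority score."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)      # a page's vote is split among its links
            for target in outgoing:
                new_rank[target] += share                     # each linked page collects its share
        rank = new_rank
    return rank

# Both A and B "vote" for C, so C ends up with the highest authority score.
links = {"A": ["C"], "B": ["C"], "C": ["A"]}
print(pagerank(links))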

Google's search history can be seen by taking a look at the progression of Google's homepage over the years. It's remarkable to see how basic and primitive the now most popular search engine once was.

Google Search Engine History: Looking In To the Past

A picture of the original 1997 Google search engine homepage, back when Google was part of stanford.edu. 

 

Google search engine homepage in 2005

The modern, minimalist Google of 2011.

source:-http://www.wordstream.com/articles/internet-search-engines-history

 

Programmers have always known that new programming languages need to be learned to keep their skills marketable in the workplace. That trend is not only continuing – it seems to be increasing due to the rate of change taking place in the technology sector.

Programming languages like C, C++, Java, HTML, Python, or PHP have always had answers to the demands of the market. However, progression in the innovation sector requires people to gain even more skills and knowledge to bring ideas to life.

Even though programming languages like Java, HTML, and Objective-C remain the backbone of development in IT, there have been some new and interesting programming languages that have gained impressive reviews and high ratings among tech gurus across the world. Below is a list of new programming languages to learn and keep an eye on in 2016.

1. Google Go

Google’s Go Programming Language was created in 2009 by three Google employees, Robert Griesemer, Rob Pike, and Ken Thompson. The language’s success can be seen clearly in the fact that the BBC, SoundCloud, Facebook, and the UK Government’s official website are some of the notable users of Go. It is faster, easier to learn, and does the same job that C++ or Java has been doing for us. As the creators said, “Go is an attempt to combine the ease of programming of an interpreted, dynamically typed language with the efficiency and safety of a statically typed, compiled language.”

2. Swift

When a programming language is launched at Apple’s WWDC, you can be sure that it has something that can deliver success and results. Swift was released at Apple’s WWDC in 2014, and its exponential growth in just one year shows how capable and promising this language is. According to Apple, Swift brings the best of Python and Ruby together and adds modern programming fundamentals, to make it more effective and fun. If you’ve been using or were planning on learning Objective-C to develop iOS apps, don’t bother learning it. Swift is the language you need to know moving forward. There will soon come a day when nobody uses Objective-C to develop apps.

3. Hack

Just like Swift, Hack is another programming language which has recently been launched, and it is a product of another tech giant, Facebook. In the past year, Facebook has converted almost its entire PHP codebase to Hack, and if a website with millions of users and unparalleled traffic can rely on Hack, then the programming language must surely be here to stay.

4. Rust

The Rust Programming Language was launched in 2014 by Mozilla. It did not receive the immediate success that Hack and Go did, but in the last 6 months the number of Rust users in the world has escalated and it is expected to climb much higher. An upgrade to C and C++, Rust is becoming more beloved by programmers every day.  

5. Julia

Delivering Hadoop style parallelism, Julia’s stock in the tech industry is rising. The Julia Language is highlighted as one that is destined to make a major impact in the future. Described as a high level, high performance, dynamic programming language for technical computing, Julia is making a niche of its own in the world of programming languages.  

6. Scala

The Scala Programming Language has been on the market a little longer than most of the other languages in this list and was probably a little slow to get off the blocks compared to the others. However, this functional and highly scalable programming language has gradually attracted attention, and companies such as Twitter, LinkedIn, and Intel are now using it in their systems.

7. Dart

Given that Google Go has garnered such unprecedented success, the other language from Google – Google Dart – has been in its shadow for the past 7-8 months. However, now that app development is gaining pace, people are realising how useful Dart can be in implementing high-performance architecture and performing modern app development. Unveiled as a substitute for JavaScript for browser apps, Dart is finally realising its true potential and is expected to continue its rise in the coming years.

source:-http://www.codingdojo.com/blog/new-programming-languages-to-learn-2016/

In a modern, multicore chip, every core—or processor—has its own small memory cache, where it stores frequently used data. But the chip also has a larger, shared cache, which all the cores can access.

If one core tries to update data in the shared cache, other cores working on the same data need to know. So the shared cache keeps a directory of which cores have copies of which data.

That directory takes up a significant chunk of memory: In a 64-core chip, it might be 12 percent of the shared cache. And that percentage will only increase with the core count. Envisioned chips with 128, 256, or even 1,000 cores will need a more efficient way of maintaining cache coherence.

At the International Conference on Parallel Architectures and Compilation Techniques in October, MIT researchers unveil the first fundamentally new approach to cache coherence in more than three decades. Whereas with existing techniques, the directory's memory allotment increases in direct proportion to the number of cores, with the new approach, it increases according to the logarithm of the number of cores.

In a 128-core chip, that means that the new technique would require only one-third as much memory as its predecessor. With Intel set to release a 72-core high-performance chip in the near future, that's a more than hypothetical advantage. But with a 256-core chip, the space savings rises to 80 percent, and with a 1,000-core chip, 96 percent.

When multiple cores are simply reading data stored at the same location, there's no problem. Conflicts arise only when one of the cores needs to update the shared data. With a directory system, the chip looks up which cores are working on that data and sends them messages invalidating their locally stored copies of it.
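
As a schematic illustration of that bookkeeping, the Python sketch below keeps a directory of which cores hold a copy of each cache line and sends invalidations on a write. It shows the conventional directory scheme described above as the baseline, not the new logarithmic-space approach from the MIT paper, and the line addresses and core IDs are invented.

class Directory:
    """Toy directory for cache coherence: tracks which cores share each cache line."""

    def __init__(self):
        self.sharers = {}                                  # line address -> set of core IDs holding a copy

    def read(self, core, addr):
        self.sharers.setdefault(addr, set()).add(core)     # reading a line adds the core as a sharer

    def write(self, core, addr):
        for other in self.sharers.get(addr, set()) - {core}:
            print(f"invalidate line {addr:#x} in core {other}")   # stale copies must be dropped
        self.sharers[addr] = {core}                        # the writer now holds the only valid copy

d = Directory()
d.read(0, 0x1000)
d.read(1, 0x1000)
d.write(2, 0x1000)    # cores 0 and 1 receive invalidation messages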

"Directories guarantee that when a write happens, no stale copies of the data exist," says Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and first author on the new paper. "After this write happens, no read to the previous version should happen. So this write is ordered after all the previous reads in physical-time order."

Wednesday, 13 July 2016 04:54

The Importance of Software Engineering in the Modern Era

Software engineering is the study and application of engineering to the design, development, and maintenance of software. A typical formal definition of software engineering is “the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.” Software engineering is a relatively new area of engineering, but its scope is extremely broad. As one of the prominent branches of engineering, it is among the fastest-growing fields in the world today.

It must be noted that the term software development can be used for every type of software development, whether it is as simple as writing Visual Basic for Applications modules for Microsoft Word, Excel, or Access, or as involved as developing large, expensive, and complicated applications for businesses or creating software for gaming entertainment.

Software engineers are computer programming professionals. It’s worth mentioning that a software engineer is also a programmer, as he writes code, but a programmer may not necessarily be called a software engineer, because the former requires a formal education. Besides, a software engineer follows a systematic process that involves understanding requirements and working with teams and various professionals in order to create the application software, components, or modules that fulfill the specific needs of users; a computer programmer, on the other hand, can work independently, as he understands algorithms and knows how to write code following the specifications given by software engineers. However, software engineering is a vast field. It is not limited to computer programming; it is much more than that, covering a wide range of professions from business to graphic design to video game development.

Specific software is needed not just in one field, but in every field of work. Since software is developed and embedded in machines so that it can serve the intents and purposes of users belonging to various professions, software engineering is of great application and assistance. The field of software engineering not only involves using common computer languages, such as C, C++, Java, Python, and Visual Basic, in a manner appropriate to attaining the intended results, but it also applies concepts in such a way that software can be developed effectively and efficiently.

Software engineers or developers are the creative minds behind computer programs. Some develop application software for clients and companies after analyzing the needs of users. Some develop the system software used to run devices and control networks. Whatever the nature of the work, software engineering is one of the highest-paid fields in this modern day and age. It’s an up-and-coming field that is expected to grow much faster than the average profession. If you have strong problem-solving skills, an eye for detail, and a good understanding of mathematical functions, then you may consider this lucrative field of study, which could give you various benefits, including a higher level of job satisfaction recompensing your creative efforts.

source:-http://fareedsiddiqui.com/

Mice, and now touchscreens, have become a daily part of our lives in the way we interact with computers. But what about people who lack the ability to use a mouse or touchscreen? Or situations where these would be impractical or outright dangerous?

Many researchers have explored eye-gaze tracking as a potential control mechanism. These tracking mechanisms have become sophisticated and small enough that they currently feature in devices such as smartphones and tablets. But on their own, these mechanisms may not offer the precision and speed needed to perform complex computing tasks.

Now, a team of researchers at the Department of Engineering has developed a computer control interface that uses a combination of eye-gaze tracking and other inputs. The team's research was published in a paper, 'Multimodal Intelligent Eye-Gaze Tracking System', in the International Journal of Human-Computer Interaction.

Dr Pradipta Biswas, Senior Research Associate in the Department's Engineering Design Centre, and the other researchers provided two major enhancements to a standalone gaze-tracking system. First, sophisticated software interprets factors such as velocity, acceleration and bearing to provide a prediction of the user's intended target. Next, a second mode of input is employed, such as a joystick.
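
As a deliberately simplified illustration of that first enhancement, the Python sketch below extrapolates the gaze point along its current velocity and picks the nearest on-screen target. The published system's model is more sophisticated than this, and the target names and coordinates here are invented.

import math

def predict_target(gaze_pos, gaze_velocity, targets, lookahead=0.2):
    """gaze_pos and gaze_velocity are (x, y) tuples; targets maps a name to its screen position."""
    # Project the gaze point a fraction of a second ahead along its current bearing.
    projected = (gaze_pos[0] + gaze_velocity[0] * lookahead,
                 gaze_pos[1] + gaze_velocity[1] * lookahead)
    # Assume the intended target is whichever one lies closest to the projected point.
    return min(targets, key=lambda name: math.dist(projected, targets[name]))

targets = {"OK button": (400, 300), "Cancel button": (100, 300)}
print(predict_target(gaze_pos=(250, 300), gaze_velocity=(500, 0), targets=targets))   # -> "OK button"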

"We hope that our eye-gaze tracking system can be used as an assistive technology for people with severe mobility impairment," Pradipta said. "We are also exploring the potential applications in military aviation and automotive environments where operators' hands are engaged with controlling an aircraft or vehicle."

The selection problem

One challenge that arises when designing such a system is that once the intended target is identified, how does the user indicate the desire to select it? On a typical personal computer, this is accomplished with a click of the mouse; with a phone or tablet, a tap on the screen.

Basic eye-gaze tracking systems often use a signal such as blinking the eyes to indicate this choice. However, blinking is often not ideal. For example, in combat situations, pilots' eyes might dry up, precluding their ability to blink at the right time.

Pradipta's team experimented with several ways to solve the selection problem, including manipulating joystick axes, enlarging predicted targets, and using a spoken keyword such as 'fire' to indicate a target.

Unsurprisingly, they found that a mouse remains the fastest and least cognitively stressful method of selecting a target – possibly helped by the fact that most computer users are already comfortable with the technique. But a multimodal approach combining eye-gaze tracking, predictive modelling, and a joystick can almost match a mouse in terms of accuracy and cognitive load. Further, when computer novices were tested after sufficient training in the system, the intelligent multimodal approach was even faster.

The hope is that these revelations will lead to systems that perform as well as – or better than – a mouse. "I am very excited for the prospects of this research," Pradipta said. "When clicking a mouse isn't possible for everyone, we need something else that's just as good."

source:-http://phys.org/news/2015-04-techniques-eye-gaze-tracking-interaction.html#nRlv

 

Displays that can be folded and rolled up have been shown in prototype smartphones, wearables, and other devices -- but when will such products be available?

Advances in technology suggest they aren't too far off in the future. Such devices could start showing up as early as next year or 2018, said Jerry Kang, senior principal analyst for emerging display technologies and OLED at IHS.

Manufacturers are trying to launch them in devices like tablets that can fold into a smartphone-size device. It's possible to use these displays in wearable devices, but reliability, weight and battery life need to be considered, Kang said.

Small folding screens will likely come before larger ones, mainly due to the economics of making such displays, Kang said.

The displays will be based on OLED (organic light-emitting diode) technology, which is considered a successor to current LCD technology. OLEDs don't need backlight panels, making them thinner and more power efficient.

 At CES this year, LG showed a stunningly thin paper-like display that could roll up. The company projects it will deliver foldable OLEDs by next year.

There are advantages to screens that can be folded or rolled up. They could lead to innovative product designs and increase the mobility of devices, Kang said.

For example, it could be easier to fit screens around the contours of a battery and other components. It will also provide a level of flexibility in how a user can change the shape of a device. But challenges remain in making such screens practical, Kang said.

 The size of batteries and circuits are of lesser concern in designing bendable screens, Kang said. The screens can be folded around components. Displays that can fold and roll are an extension of flexible displays, which are already in wearables, smartphones and TVs. For example, some TVs have flexible screens that are designed so that they can be slightly curved.

Samsung and LG started using flexible AMOLED displays in smartphones in 2013 and are adapting those screens for wearables. Those companies are also leading the charge to bring displays that can bend and fold to devices. 

 The sorts of flexible displays that are used in curved products are still in their infancy, but IHS projects such screens to continue siphoning market share from non-flexible displays. In 2022, 433.3 million flexible displays will ship, compared to 3.6 billion units of non-flexible displays.

source:-http://www.infoworld.com/

Sometimes Windows needs a completely fresh start.

Sometimes Windows needs a fresh start—maybe a program’s gone awry or a file’s been corrupted. Luckily, Windows 10 lets you do this with a few clicks.

Windows 10 has an option where you can reinstall Windows and wipe your programs, but it keeps your files intact. Note that this won’t get rid of any “bonus” bloatware programs your PC vendor put on your computer before you bought it—you’ll have to do that manually—but it will get rid of any software you or someone else installed afterward.

Even though Windows says it’ll keep your files intact, it always pays to back up your PC or at least the important files before you do anything like this.

Ready? Okay. Hit the Start button and go to Settings. In Settings, select Update and Security, and in there, select Recovery.

At the top of the Recovery section you’ll see Reset this PC. Click the Get Started button—don’t worry, you’ve still got one more step—and then you get to choose an option. In this case, we’re choosing Keep my files, and the dialog box reminds you that this will remove your apps and settings. Then you just sit back and let Windows do its thing. It may take a while. When it’s done, you should have a fresh Windows installation, and unless you’re very unlucky, your personal files will still be right where you left them.

source:-http://www.computerworld.in/

Friday, 08 July 2016 04:40

A Short History of Computer Viruses

Computers and computer users are under assault by hackers like never before, but computer viruses are almost as old as electronic computers themselves. Most people use the term “computer virus” to refer to all malicious software, which we call malware. Computer viruses are actually just one type of malware: self-replicating programs designed to spread themselves from computer to computer. A virus is, in fact, the earliest known type of malware invented.

The following is a history of some of the most famous viruses and malware ever:

1949 – 1966 – Self-Reproducing Automata: The idea of self-replicating programs was established in 1949 by John von Neumann, who is known as the “Father of Cybernetics”; his article on the “Theory of Self-Reproducing Automata” was published in 1966.

1959 – Core Wars: A computer game was programmed at Bell Laboratories by Victor Vysottsky, H. Douglas McIlroy, and Robert P. Morris. They named it Core Wars. In this game, infectious programs named “organisms” competed for the processing time of the computer.

1971 – The Creeper: Bob Thomas developed an experimental self-replicating program. It spread through ARPANET (the Advanced Research Projects Agency Network) and copied itself to remote host systems running the TENEX operating system, where it displayed the message “I’m the creeper, catch me if you can!”. Another program, named Reaper, was created to delete the Creeper.

1974 – Wabbit (Rabbit): This infectious program was developed to make multiple copies of itself on a computer, clogging the system and reducing its performance.

1974 – 1975 – ANIMAL: John Walker developed a program called ANIMAL for the UNIVAC 1108. This was said to be a non-malicious Trojan that is known to spread through shared tapes.

1981 – Elk Cloner: A program called “Elk Cloner” was developed by Richard Skrenta for Apple II systems. It was created to infect Apple DOS 3.3 and spread to other computers via infected floppy disks.

1983 – This was the year the term “virus” was coined by Frederick Cohen for computer programs that are infectious because of their tendency to replicate.

1986 – Brain: This virus, also known as the “Brain boot sector” virus and compatible with the IBM PC, was programmed and developed by two Pakistani programmers, Basit Farooq Alvi and his brother Amjad Farooq Alvi.

1987 – Lehigh: This virus, which originated at Lehigh University, was programmed to infect command.com files.

Cascade: This self-encrypting file virus was the impetus for IBM to develop its own antivirus product.

Jerusalem Virus: This virus was first detected in the city of Jerusalem. It was developed to destroy all files on infected computers on any Friday the 13th.

1988 – The Morris Worm: This worm was created by Robert Tappan Morris to infect DEC VAX and Sun machines running BSD UNIX via the Internet. It is best known for exploiting computers that were prone to buffer overflow vulnerabilities.

1990 – Symantec launched one of the first antivirus programs, Norton AntiVirus, to fight infectious viruses. The first family of polymorphic viruses, called Chameleon, was developed by Ralf Burger.

1995 – Concept: This virus, named Concept, was created to spread through and attack Microsoft Word documents.

1996 – A macro virus known as Laroux was developed to infect Microsoft Excel documents, a virus named Baza was developed to infect Windows 95, and a virus named Staog was created to infect Linux.

1998 – CIH Virus: The first version of the CIH virus, developed by Chen Ing Hau of Taiwan, was released.

1999 – Happy99: This worm was developed to attach itself to emails with the message “Happy New Year”. Outlook Express and Internet Explorer on Windows 95 and 98 were affected.

2000 – ILOVEYOU: The virus was capable of deleting files in JPEG, MP2, or MP3 formats.

2001 – Anna Kournikova: This virus was spread by emails to the contacts in the compromised address book of Microsoft Outlook. The emails purported to contain pictures of the very attractive female tennis player, but in fact hid a malicious virus.

2002 – LFM-926: This virus was developed to infect Shockwave Flash files. Beast or RAT: This is a backdoor Trojan horse capable of infecting all versions of Windows OS.

2004 – MyDoom: This infectious worm, also called Novarg, was developed to spread via file sharing and to permit hackers access to infected computers. It is known as the fastest mailer worm.

2005 – Samy XXA: This virus was developed to spread faster, and it is known to infect the Windows family.

2006 – OSX/Leap-A: This was the first known malware discovered targeting Mac OS X. Nyxem: This worm was created to spread by mass-mailing, destroying Microsoft Office files.

2007 – Storm Worm: This was a fast spreading email spamming threat against Microsoft systems that compromised millions of systems.

Zeus: This is a type of Trojan used to capture login credentials from banking websites and commit financial fraud.

2008 – Koobface: This virus was developed and created to target Facebook and MySpace users.

2010 – Kenzero: This is a virus that spreads online between sites through browsing history.

2013 – Cryptolocker: This Trojan horse encrypts the files on an infected machine and demands a ransom to unlock them.

2014 – Backoff: Malware designed to compromise Point-of-Sale (POS) systems to steal credit card data.

source:-https://antivirus.comodo.com

 
