Mobile SEO is a trending topic, and for good reason. Today's world is a digital world, and the number of people who depend on mobile devices is growing steadily. Millions of people access the web using smartphones running Android, iOS, or Windows. If you want to grow your business, you have to adapt to this constantly changing environment and make suitable changes to your website design to attract more visitors.

Mobile has become the primary device used to access websites. The number of mobile users has surpassed desktop users, but this didn't come as a surprise: as far back as 2015, Google said that more searches were conducted on mobile than on any other device category. Google made mobile-friendliness a ranking factor in April 2015, and since then, everything on the Web has been about mobile.

What is Mobile SEO?

Mobile Search Engine Optimization is the process of making a website suitable for viewing on mobile devices of varied screen sizes, with acceptable load times. Most important of all, the site should show up in Google's mobile search results.

It should not be too difficult for a business to optimize its site for mobile devices if the site is already optimized for search engines.

We can classify the steps into three main categories:

  1. Select a Mobile Configuration
  2. Inform Google and other Search Engines
  3. Avoid Common Mistakes

We have three different mobile configurations:

  • Responsive Web Design
  • Dynamic Serving
  • Separate URLs

Each mobile configuration has its own pros and cons. Although Google recommends responsive design, it supports all three configurations.

Responsive Web Design

Responsive web design is the simplest of the three configurations, and it's very easy to implement; Google highly recommends it. The server delivers the same HTML code on the same URL, and the display adjusts based on the screen size of the device.

Dynamic Serving

Dynamic serving is a form of mobile configuration in which the URL of your website remains the same, but the server delivers different HTML content when the site is accessed from a mobile device.
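To make the idea concrete, here is a minimal Python sketch of dynamic serving. The user-agent check is a deliberately crude illustration (not a production-grade device detector), but the `Vary: User-Agent` header is the real signal Google documents for this configuration:

```python
# Minimal sketch of dynamic serving: same URL, different HTML by user agent.
# The substring check below is a crude illustration, not a real detector.

MOBILE_HINTS = ("Mobile", "Android", "iPhone")

def is_mobile(user_agent: str) -> bool:
    """Crude user-agent sniffing, for illustration only."""
    return any(hint in user_agent for hint in MOBILE_HINTS)

def serve_page(user_agent: str) -> tuple[dict, str]:
    """Return (headers, html) served from the same URL."""
    # The Vary header tells caches and crawlers that the response
    # differs by user agent -- essential for dynamic serving.
    headers = {"Vary": "User-Agent", "Content-Type": "text/html"}
    if is_mobile(user_agent):
        return headers, "<html><body>mobile layout</body></html>"
    return headers, "<html><body>desktop layout</body></html>"
```

Without the `Vary` header, an intermediate cache could serve the desktop HTML to a mobile visitor (or vice versa), which is why it matters as much as the detection itself.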

Separate URLs

In this type of configuration, you maintain two distinct URLs: one for mobile users and another for desktop users. Besides this, make sure you clearly signal to search engines which version to serve when. Google does not recommend separate URLs, although it can often detect automatically that your mobile pages are different from your desktop pages.
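For a separate-URL setup, Google documents two link annotations: a `rel="alternate"` tag on the desktop page pointing at the mobile URL, and a `rel="canonical"` tag on the mobile page pointing back. The helper functions and example URLs below are hypothetical, but the tags they emit follow that documented pattern:

```python
# Sketch of the link annotations used for a separate-URL (m-dot) setup.
# The function names and example URLs are illustrative, not a real API.

def desktop_annotation(mobile_url: str) -> str:
    # Placed in the <head> of the desktop page: points crawlers
    # at the equivalent mobile version.
    return (f'<link rel="alternate" '
            f'media="only screen and (max-width: 640px)" '
            f'href="{mobile_url}">')

def mobile_annotation(desktop_url: str) -> str:
    # Placed in the <head> of the mobile page: canonicalizes
    # back to the desktop URL so the two don't compete.
    return f'<link rel="canonical" href="{desktop_url}">'

print(desktop_annotation("https://m.example.com/page"))
print(mobile_annotation("https://www.example.com/page"))
```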

Inform Google and other Search Engines

You have to ensure that Google and other search engines understand your mobile configuration. Google needs to understand your pages so that it can rank your website properly. How you inform Google depends on which mobile configuration you have opted for: responsive web design, dynamic serving, or separate URLs.

Avoid Common Mistakes

Mobile search algorithms are different from desktop search algorithms. If you don't want to miss out on mobile searches, you need to avoid some common mistakes. The following are some of the most important rules to consider when optimizing for mobile search.

1. Always Use Shorter Key Phrases/Keywords

Mobile users more frequently search for shorter key phrases, or even just a single keyword. Most of the time, the search query is limited to only one or two words. If your site doesn't rank well for shorter key phrases, you will miss out on a good amount of mobile traffic.

2. Mobile Users Mostly Search for Local Results

In addition to using shorter key phrases, mobile users are mainly searching for local results. For example, if a mobile user is standing on a street looking for a place to dine, she is most likely looking for a place in her neighborhood, not in another corner of the world.

3. In Mobile Search, Top 10 Is Actually Top 4

Mobile users hate scrolling through long search pages or hitting next. A page containing 10 search results fits on a desktop screen, but on a mobile device it can be split across two or more screens. Therefore, in mobile search, it is not really the Top 10 but the Top 4, or even the Top 3, because only the first four positions appear on the first screen, and they have a higher chance of getting traffic. If you want mobile traffic, you have to rank in the Top 4.

4. Don’t forget to promote your mobile-friendly site

You have to submit your site to mobile search engines, mobile portals, and directories. If your visitors come from Google and the other major search engines, that's great. However, if you want more traffic, mobile search engines, mobile portals, and directories are even better. Often, a mobile user doesn't search with Google but goes to a portal instead. If your site is listed in that portal, the user will come directly to you from there, not from a search engine.

5. You Should Avoid Long Pages

Use shorter texts, because mobile users don't like reading long pages. (As noted above, mobile searchers don't like lengthy key phrases either.) This is why you should create a shorter mobile version of your site that is easy to read and quick to view. Short pages don't mean you should skip your keywords. Keywords are also vital for mobile search, so don't exclude them. That's where the science comes in.

The following is a list of some useful tools you can use to find out how mobile-friendly your site is:

Mobile Emulator: It helps you see how your site appears on a wide variety of mobile devices.

Moz Local: If you want to ensure that your local SEO is in order, use this tool.

Responsive Web Design Testing Tool: You can use this tool if you want to see how your responsive site looks on mobile devices with different standard screen sizes.

Screaming Frog: This tool is very useful if you want to analyze your site and double-check all the redirects.

Test My Site: If you want to find out how well your site works across various mobile devices, use Test My Site. It tests your site’s performance on mobile with Google and sends recommendations for improving performance.


Mobile is the most important consideration for SEO in 2017. If you get it right, your business is sure to attract a lot of new visitors and, consequently, buyers. From starting with the basics to avoiding the tiniest of mistakes, Mobile SEO is a science in itself. While doing SEO in-house is a fine idea, nothing is more convenient than hiring specialists to do it for you. ResultFirst offers progressive Mobile SEO solutions to help you attain high rankings in mobile searches and connect with billions of global users.


New coding strategy maximizes data storage capacity of DNA molecules.

Humanity may soon generate more data than hard drives or magnetic tape can handle, a problem that has scientists turning to nature's age-old solution for information-storage -- DNA.

In a new study in Science, a pair of researchers at Columbia University and the New York Genome Center (NYGC) show that an algorithm designed for streaming video on a cellphone can unlock DNA's nearly full storage potential by squeezing more information into its four base nucleotides. They demonstrate that this technology is also extremely reliable. DNA is an ideal storage medium because it's ultra-compact and can last hundreds of thousands of years if kept in a cool, dry place, as demonstrated by the recent recovery of DNA from the bones of a 430,000-year-old human ancestor found in a cave in Spain.

"DNA won't degrade over time like cassette tapes and CDs, and it won't become obsolete -- if it does, we have bigger problems," said study coauthor Yaniv Erlich, a computer science professor at Columbia Engineering, a member of Columbia's Data Science Institute, and a core member of the NYGC.

Erlich and his colleague Dina Zielinski, an associate scientist at NYGC, chose six files to encode, or write, into DNA: a full computer operating system, an 1895 French film, "Arrival of a train at La Ciotat," a $50 Amazon gift card, a computer virus, a Pioneer plaque and a 1948 study by information theorist Claude Shannon.

They compressed the files into a master file, and then split the data into short strings of binary code made up of ones and zeros. Using an erasure-correcting algorithm called fountain codes, they randomly packaged the strings into so-called droplets, and mapped the ones and zeros in each droplet to the four nucleotide bases in DNA: A, G, C and T. The algorithm deleted letter combinations known to create errors, and added a barcode to each droplet to help reassemble the files later.
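The core binary-to-nucleotide step described above can be sketched in a few lines of Python. The two-bits-per-base mapping below (00→A, 01→C, 10→G, 11→T) is an illustrative assumption; the real DNA Fountain pipeline also involves fountain-code droplets, screening of error-prone sequences, and barcodes, none of which this sketch attempts:

```python
# Illustrative two-bits-per-base mapping. The actual DNA Fountain scheme
# layers fountain codes, error screening, and barcodes on top of a step
# like this; only the core binary-to-nucleotide idea is shown here.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(bits: str) -> str:
    """Map a binary string (even length) to a DNA strand."""
    assert len(bits) % 2 == 0
    return "".join(BASE_FOR_BITS[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Map a DNA strand back to its binary string."""
    return "".join(BITS_FOR_BASE[base] for base in strand)

strand = encode("0100111000")   # -> "CATGA"
assert decode(strand) == "0100111000"
```

This also makes the theoretical limit intuitive: with four bases, each nucleotide can carry at most two bits, which is the ceiling the article discusses below.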

In all, they generated a digital list of 72,000 DNA strands, each 200 bases long, and sent it in a text file to a San Francisco DNA-synthesis startup, Twist Bioscience, that specializes in turning digital data into biological data. Two weeks later, they received a vial holding a speck of DNA molecules.

To retrieve their files, they used modern sequencing technology to read the DNA strands, followed by software to translate the genetic code back into binary. They recovered their files with zero errors, the study reports. (In this short demo, Erlich opens his archived operating system on a virtual machine and plays a game of Minesweeper to celebrate.)

They also demonstrated that a virtually unlimited number of copies of the files could be created with their coding technique by multiplying their DNA sample through polymerase chain reaction (PCR), and that those copies, and even copies of their copies, and so on, could be recovered error-free.

Finally, the researchers show that their coding strategy packs 215 petabytes of data on a single gram of DNA -- 100 times more than methods published by pioneering researchers George Church at Harvard, and Nick Goldman and Ewan Birney at the European Bioinformatics Institute. "We believe this is the highest-density data-storage device ever created," said Erlich.

The capacity of DNA data storage is theoretically limited to two binary digits for each nucleotide, but the biological constraints of DNA itself and the need to include redundant information to reassemble and read the fragments later reduce its capacity to 1.8 binary digits per nucleotide base.

The team's insight was to apply fountain codes, a technique Erlich remembered from graduate school, to make the reading and writing process more efficient. With their DNA Fountain technique, Erlich and Zielinski pack an average of 1.6 bits into each base nucleotide. That's at least 60 percent more data than previously published methods, and close to the 1.8-bit limit.
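A quick back-of-envelope check, using only figures quoted in this article (72,000 strands of 200 bases each, at roughly 1.6 bits per base), shows the numbers are consistent with the ~2 megabytes archived:

```python
# Sanity check using the article's own figures: 72,000 strands,
# 200 bases per strand, about 1.6 bits packed into each base.
strands = 72_000
bases_per_strand = 200
bits_per_base = 1.6

total_bases = strands * bases_per_strand       # 14,400,000 nucleotides
capacity_bits = total_bases * bits_per_base    # about 23 million bits
capacity_mb = capacity_bits / 8 / 1_000_000    # about 2.88 MB

print(capacity_mb)  # headroom for the ~2 MB archive plus redundancy
```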

Cost still remains a barrier. The researchers spent $7,000 to synthesize the DNA they used to archive their 2 megabytes of data, and another $2,000 to read it. Though the price of DNA sequencing has fallen exponentially, there may not be the same demand for DNA synthesis, says Sri Kosuri, a biochemistry professor at UCLA who was not involved in the study. "Investors may not be willing to risk tons of money to bring costs down," he said.

But the price of DNA synthesis can be vastly reduced if lower-quality molecules are produced, and coding strategies like DNA Fountain are used to fix molecular errors, says Erlich. "We can do more of the heavy lifting on the computer to take the burden off time-intensive molecular coding," he said.


Every SEO professional knows that top search rankings require a properly researched strategy, optimized on-page and off-page SEO factors, engaging content, and long-lasting search value. In other words, breaking into the top ten on Google is by no means easy.

Given that white hat SEO strategy takes a lot of time, some SEO practitioners still choose to cut corners when optimizing their pages. For obvious reasons, I don't recommend using any of the black hat SEO techniques. But it can be challenging to tell white hat SEO from its black hat variety, or to draw a distinct line between white hat and gray hat optimization. Search engine optimization is constantly evolving, and some of the techniques that were once legitimate are strictly forbidden now. Let's set the record straight.

Let’s dive into the six on-page optimization techniques that Google dislikes and could get your website penalized.

#1 Keyword Stuffing

Digital marketers today recognize the importance and value of content marketing. Several years ago, however, marketers could produce tons of thin content with zero value to push their way through to the top of search results. Keyword stuffing (or keyword stacking) was one of the most common content generation methods due to its simple process:

  1. Research search terms you want to rank for (look for exact-match keywords)
  2. Produce content with a focus on the topic, but not too in-depth
  3. Stuff the content with keywords (repeat the exact-match keywords and phrases frequently)
  4. Make sure that meta tags are also stuffed with keywords

The Google of 2017 favors meaningful and authentic content. It is smart enough to easily detect and penalize sites with thin, low-quality, or plagiarized content, so marketers should avoid stuffing their content with repetitive keywords.


Embrace a "reader first" approach. Develop a habit of creating content that really matters to your target audience. Keywords should always come second. Use your main keyword sparingly (2-3 times per 500 words) and include a couple of highly relevant long-tail keywords to help crawlers identify the value of your content. To improve your marketing strategy, consider using a variety of keywords to get your content noticed.
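As a rough illustration of the "2-3 times per 500 words" rule of thumb, here is a small Python sketch that normalizes keyword mentions to a per-500-words rate. The threshold and the simplistic tokenization are assumptions for illustration, not any official Google metric:

```python
# Rough keyword-density check, normalized to mentions per 500 words.
# Both the tokenizer and the threshold are illustrative assumptions.
import re

def keyword_mentions_per_500(text: str, keyword: str) -> float:
    """Return how often `keyword` appears per 500 words of `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    mentions = len(re.findall(re.escape(keyword.lower()), text.lower()))
    return mentions / max(len(words), 1) * 500

# A deliberately stuffed page: the key phrase in every third word.
article = "mobile seo tips " * 100 + "great content " * 150
rate = keyword_mentions_per_500(article, "mobile seo")

# Flag the page if the main keyword far exceeds ~3 mentions per 500 words.
stuffed = rate > 3
```

Real crawlers weigh far more signals than raw density, but a check like this is enough to catch the obvious stuffing described above.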

#2 Spammy Footer Links and Tags

A footer is a must-have element for any website. It helps visitors navigate between multiple website sections and provides access to additional information such as contact info and a copyright license.

Footers, however, were widely stuffed with keyword-rich links and tag clouds, so it's no wonder that websites relying on link- and tag-filled footers were penalized. They were hit by two algorithm updates: Panda, which targeted poor website structure, and Penguin, which flagged sites engaging in link and tag manipulation. Avoid spammy footers to achieve a higher search engine ranking position.


When you optimize a site, make sure it has a nice and clean footer that features vital data like contact information, address, working hours, terms of use, copyright license, navigation buttons, subscription field, and more. But spamming the footer is a big no-no.

#3 Cloaking

This old school SEO technique is rooted in the ability to display two separate pieces of content on a single webpage. The first text is “fed” to bots for further crawling, while the second one is showcased to actual readers. What did cloaking accomplish?

1. Manipulate search engines to achieve higher SERP positions

2. Allow a specific page to rank high while remaining easy for users to read

3. Deliver “unrelated content” for common requests (e.g. you search “cute little puppies” but end up on a porn site)

Cloaking is an advanced black hat SEO method. To harness its true power, you should be able to identify search engine spiders by IP and deliver different webpage content to them. This also requires abusing specific server-side scripts and “behind the scenes” code.

Cloaking is a tactic that was marked as "black hat" years ago. Google has never considered it legitimate, but it is nonetheless widely used on the so-called deep web.


This one is simple: avoid cloaking. Don’t risk your reputation for a quick SERP ranking.

#4 Internal Linking With Keyword-rich Anchor Text

Internal linking is not always a bad thing. Ideally, linking allows you to connect web pages to create properly structured “paths” for search crawlers to index your site from A to Z. But marketers must walk a fine line when it comes to internal linking.

When you repeatedly link your site's inner pages with keyword-rich anchor text, this raises a big red flag to Google. You risk being hit with an over-optimization penalty.


Google puts value first. Make sure the word “value” dominates your content creation process as well. The same is true for links: place them only where they bring real value to users. And don’t forget to use different anchor texts for your inner links.

#5 Dedicated Pages for Every Keyword Variant

For a long time, keywords played a critical role in how your pages ranked. Though their importance is on the decline, keywords are by no means dead. These days, you can’t stuff your site with multiple variations of targeted keywords to boost rankings. Instead, focus on:

  1. Proper keyword selection
  2. Topic and context
  3. User intent

You must be careful and avoid over-optimizing your websites. Remember: Panda, Hummingbird, and RankBrain are always looking for sites that are abusing keywords.

In old-school SEO practices, over-optimization was a must. In addition, acquiring a high search engine ranking position required the creation of specific pages for every keyword variant.

Suppose you wanted to rank your website that sold "custom rings." Your win-win strategy would be to create dedicated pages for "custom unique rings," "custom personalized rings," "custom design rings," and every other variation involving the words "custom" and "rings."

The idea was pretty straightforward: marketers would target all keywords individually to dominate search by every keyword variation. You sacrifice usability but drive tons of traffic.

This on-page SEO method was 100% legitimate several years ago, but if you were to try this strategy today, you would run the risk of receiving a manual action penalty.


To avoid a manual action penalty, don’t create separate pages for each particular keyword variant. Google’s “rules” for calculating SERPs have significantly changed, rendering old-time strategies obsolete. Utilizing the advanced power of its search algorithms and RankBrain, Google now prioritizes sites that bring actual value to users. Your best strategy to reach the top is to create visually and structurally appealing landing pages for your products and services, post high-quality, SEO-optimized content in your blog, and build a loyal following on social media platforms.

#6 Content Swapping

Google has been a powerhouse for more than a decade. It has employed thousands of people and invested billions of dollars in research and development, yet its search engine lagged in intelligence for several years.

SEO strategies previously could easily trick Google. For example, content swapping was one way to manipulate Google’s algorithms and indexing protocols.

It worked like this:

  • Post a piece of content on a site
  • Wait until Google bots crawl and index it
  • Make sure that the site appears in search results
  • Block the page or the entire site from indexation
  • Swap the content

The result: a page that originally featured an article about tobacco pipes could end up with swapped content about prohibited medications or banned substances.

Content swapping has always been forbidden, but advanced black hat SEO professionals used the technique because Google wasn’t quick enough to reindex sites.

Now with changes in the SEO landscape, Google can almost immediately cut the search ranking of any site that’s closed from indexation. This is exactly why content swapping is no longer a viable strategy as it can result in significant penalties.


Content is king. If you want to reach the top of Google, prioritize creating dense content that provides as much information as possible and adds value to your audience. Google hates sites that manipulate content, so stay away from content swapping.

Conclusion

For better or worse, on-page SEO never stays still. Launching one algorithm update after another, Google is constantly pushing SEO experts and digital marketers to find new opportunities to improve their rankings via organic search.

Google drives innovation forward, making some of the most popular on-page optimization techniques of the past outdated. No longer can marketers rely on keyword stuffing, thin content, and the myriad of other grey and black hat on-page SEO methods.

SEO strategies of the past were technical and manipulative but lacked long-term sustainability. A true SEO pro could easily trick Google by stuffing content with keywords and paid links. The only problem was, it didn't bring any real value to customers.


It seems that it isn't just users who are annoyed at the prevalence of fake news in search results and social media these days. Google has made some new and very specific changes to its Quality Rater Guidelines to target fake news, as well as other types of sites, including hate sites, "monstrously inaccurate" information sites such as science and medical denial, and other types of sites that are offensive to searchers.

Additions to Low Quality

Specifically, Google is designating the following as low quality:

  • Sites that mimic other well known sites, including news sites
  • Sites that present themselves as news sites but contain factually inaccurate content meant to benefit a “person, business, government, or other organization politically, monetarily or otherwise”
  • Sites that deliberately misinform or deceive users by presenting factually inaccurate content
  • Sites with unsubstantiated conspiracy theories or hoaxes, presented as if they were factual
  • Pages or websites presenting dubious scientific facts
  • Sites that promote hate crimes or violence against a group of people

Impact of Fake News in Google Search Results

Paul Haahr, a ranking engineer at Google, spoke about the quality rater guideline changes and revealed an interesting statistic about "fake news" results: according to Haahr, only about 0.1% of traffic touches these areas. This is a surprisingly small number, which seems disproportionate to the amount of publicity these types of results get in the news media.

News Now Considered Your Money or Your Life

One of the more important changes is that news sites are now considered to be "Your Money or Your Life" sites. These are the types of sites that are held to a standard above and beyond merely high quality, as they impact a person's wellbeing, health, or life. They also require the highest E-A-T (expertise, authoritativeness, and trustworthiness).

  • High quality news articles should contain factually accurate content presented in a way that helps users achieve a better understanding of events. High quality news sources typically have established editorial policies and review processes.
  • High quality information pages on scientific topics should represent well-established scientific consensus on issues where such consensus exists.

It is also clear that Google wants raters to have clear guidelines on rating this type of content low – without the explicit instructions that fake news and hate sites are considered low quality, the accuracy of any algo changes they test with the raters might not be that clear.

Using New Guidelines to Test New Algos

Haahr said that they needed to make these specific changes to the guidelines in order to gather training data from the raters. The need for training data suggests they are looking for ways to algorithmically detect and downrank sites that fall into the categories of fake news, hate sites, or other sites with dubious and unsupported theories or claims. Once raters know these sites should always be marked as low quality, Google can then test algorithm changes targeting them specifically with the quality raters. Regarding the additions covering hate crimes and hate sites targeting any specific group, Haahr said that raters are not required to rate these types of sites. Google also tried to be thoughtful about the examples used in the guidelines, to avoid upsetting anyone.


Monday, 13 March 2017 03:58

Photoshop's Role in a Web Design Workflow


The web has undergone some serious changes in recent years and the way in which the web is designed is changing along with it. Photoshop may still be the "go-to" tool for many web designers, but for some, Photoshop is no longer king.

What is it Good For?

Technically speaking, Photoshop is an application for manipulating imagery, but it's also packed with tools for building graphics from scratch. Shapes, vectors, type, fills, and effects: all of these (and more) lend themselves very well to constructing graphic layouts.

Not too long ago, web browsers were incapable of directly generating these kinds of effects themselves, but they could display bitmap images perfectly well. In order to explore graphic design within a browser it was only logical to reach for Photoshop, create your visuals, save them as images and use them within a web page.

Gradients, shadows, patterns, angles; all easy to create with Photoshop's tools - not too easy to create with anything else.

Building for the web was also relatively complex (far less streamlined than nowadays) so mocking up a layout in Photoshop was always going to be easier and quicker than battling with Dreamweaver. Why wouldn't you design in a graphics application, get approval from the client, then actually build for the web? Today's designers have grown up using Photoshop because it offered the quickest way to visualize a hi-fidelity design concept.

The Legacy of Print Design

Back when the web was an emerging medium there were no "web agencies", so the task of crafting it fell to print designers. These guys did what was logical; they took their digital print design experience, values, techniques, processes and tools, then applied them to this brave new world.

All that needed altering was the final output. As such, Photoshop witnessed the changes and went along for the ride, further rooting itself as the graphic designer's best friend.

What are its Limitations?

Times they are a-changin' (as Bob Dylan said). The web is a different place these days and Photoshop's role in the process of designing for that web is also changing. Why? A big part of the issue lies in technological advancements which have driven huge change in web design over recent years. We've seen the internet grow from a library of static documents to an interactive pool of services and applications. Network providers have spread their fingers into almost every corner of the globe, bandwidth and speed have increased to science fiction-like levels. Internet enabled devices such as smartphones, tablets, even watches, are manufactured affordably and rapidly. All of this has revolutionized the way in which we use the web - and it's drastically altered our perception of how we should be designing for it.

A Fluid Web

Print designers begin with constraints (the fixed dimensions of a page) then design within them. When first designing for the web this was also a logical starting point; establish a fixed canvas and work inwards. To figure out what those fixed dimensions should be, designers had to make assumptions about end user screen sizes. Very early on 800x600px was most common. Later, 1024x768px was the norm. Working to a grid of 960px made sense because it fit most screens (larger screens were rare, owners of smaller screens would just have to upgrade eventually) and was divisible by a range of column widths. These assumptions were wrong then (forcing a user to adjust their browsing to your design?!) and are even more so these days. How big is a web page today?

Photoshop inherently works to fixed boundaries. Shapes, type and objects within its layouts are fixed, whereas web pages increasingly aren't. Producing a comp to present to a client used to be quickly achieved in Photoshop, but how can you effectively present a fluid layout as a static snapshot?


Friday, 10 March 2017 10:15

Announcement of Office Relocation


Manomaya has come a long way since its inception in 2010. After creating a niche for ourselves and serving clients from across the globe, we are expanding our team and office space for the many more projects we endeavor to take on in the future.

No matter what business you run or which field you are striving hard in, it feels good when you expand your team and move into a bigger office. Let's be honest, it feels great.

We at Manomaya are feeling that delight, and we are proud to be moving our office to the new tech park called Center for Technology Innovation and Entrepreneurship (CTIE), Hubli. On the occasion of opening our new office space, we wanted to make some good memories, so we enjoyed ourselves a lot and captured a few of them. Take a look at some of the pics…


Friday, 10 March 2017 05:47

How computers can learn better


With a recently released programming framework, researchers show that a new machine-learning algorithm outperforms its predecessors.

Reinforcement learning is a technique, common in computer science, in which a computer system learns how best to solve some problem through trial-and-error. Classic applications of reinforcement learning involve problems as diverse as robot navigation, network administration and automated surveillance.

At the Association for Uncertainty in Artificial Intelligence’s annual conference this summer, researchers from MIT’s Laboratory for Information and Decision Systems (LIDS) and Computer Science and Artificial Intelligence Laboratory will present a new reinforcement-learning algorithm that, for a wide range of problems, allows computer systems to find solutions much more efficiently than previous algorithms did.

The paper also represents the first application of a new programming framework that the researchers developed, which makes it much easier to set up and run reinforcement-learning experiments. Alborz Geramifard, a LIDS postdoc and first author of the new paper, hopes that the software, dubbed RLPy (for reinforcement learning and Python, the programming language it uses), will allow researchers to more efficiently test new algorithms and compare algorithms’ performance on different tasks. It could also be a useful tool for teaching computer-science students about the principles of reinforcement learning.

Geramifard developed RLPy with Robert Klein, a master's student in MIT's Department of Aeronautics and Astronautics.

Every reinforcement-learning experiment involves what's called an agent, which in artificial-intelligence research is often a computer system being trained to perform some task. The agent might be a robot learning to navigate its environment, or a software agent learning how to automatically manage a computer network. The agent has reliable information about the current state of some system: The robot might know where it is in a room, while the network administrator might know which computers in the network are operational and which have shut down. But there's some information the agent is missing — what obstacles the room contains, for instance, or how computational tasks are divided up among the computers.

Finally, the experiment involves a “reward function,” a quantitative measure of the progress the agent is making on its task. That measure could be positive or negative: The network administrator, for instance, could be rewarded for every failed computer it gets up and running but penalized for every computer that goes down.

The goal of the experiment is for the agent to learn a set of policies that will maximize its reward, given any state of the system. Part of that process is to evaluate each new policy over as many states as possible. But exhaustively canvassing all of the system’s states could be prohibitively time-consuming.
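The loop described above (an agent, states, actions, a reward function, and trial-and-error policy updates) can be sketched with tabular Q-learning on a toy problem. The two-state "machine up/down" environment and all its parameters are invented for illustration; this is a generic sketch of reinforcement learning, not the paper's algorithm:

```python
# Toy tabular Q-learning sketch of the reinforcement-learning loop:
# a made-up one-machine environment where rebooting fixes a failure.
import random

random.seed(0)
STATES = ["down", "up"]
ACTIONS = ["reboot", "wait"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """The environment: returns (next_state, reward)."""
    if state == "down":
        # Rebooting a failed machine restores it; waiting does not.
        return ("up", 1.0) if action == "reboot" else ("down", -1.0)
    # A working machine occasionally fails on its own.
    return ("down", -1.0) if random.random() < 0.2 else ("up", 1.0)

state = "down"
for _ in range(500):
    # Epsilon-greedy trial and error: mostly exploit, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt
```

After training, the learned values should prefer rebooting a downed machine over waiting, which is exactly the kind of policy the experiment is meant to discover.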

Consider, for instance, the network-administration problem. Suppose that the administrator has observed that in several cases, rebooting just a few computers restored the whole network. Is that a generally applicable solution?

One way to answer that question would be to evaluate every possible failure state of the network. But even for a network of only 20 machines, each of which has only two possible states — working or not — that would mean canvassing a million possibilities. Faced with such a combinatorial explosion, a standard approach in reinforcement learning is to try to identify a set of system “features” that approximate a much larger number of states. For instance, it might turn out that when computers 12 and 17 are down, it rarely matters how many other computers have failed: A particular reboot policy will almost always work. The failure of 12 and 17 thus stands in for the failure of 12, 17 and 1; of 12, 17, 1 and 2; of 12, 17 and 2, and so on.

Geramifard — along with Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, Thomas Walsh, a postdoc in How’s lab, and Nicholas Roy, an associate professor of aeronautics and astronautics — developed a new technique for identifying pertinent features in reinforcement-learning tasks. The algorithm first builds a data structure known as a tree — kind of like a family-tree diagram — that represents different combinations of features. In the case of the network problem, the top layer of the tree would be individual machines, the next layer would be combinations of two machines, the third layer would be combinations of three machines, and so on.

The algorithm then begins investigating the tree, determining which combinations of features dictate a policy’s success or failure. The relatively simple key to its efficiency is that when it notices that certain combinations consistently yield the same outcome, it stops exploring them. For instance, if it notices that the same policy seems to work whenever machines 12 and 17 have failed, it stops considering combinations that include 12 and 17 and begins looking for others.
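The tree search described above can be sketched roughly as follows. This is a simplified illustration of the pruning idea, not the researchers' actual algorithm; the function names and the `outcome` callback are assumptions made for the example:

```python
from itertools import combinations

def find_deciding_features(machines, outcome, max_depth=3):
    """Layer-by-layer search over combinations of failed machines.
    `outcome(combo)` returns the observed result when exactly that set
    of machines is down, or None if the result is not yet determined.
    Once a combination is found to dictate the outcome, its supersets
    are pruned rather than explored."""
    deciding = []
    for depth in range(1, max_depth + 1):      # layer 1: singles, layer 2: pairs, ...
        for combo in combinations(machines, depth):
            if any(set(d) <= set(combo) for d in deciding):
                continue                       # outcome already explained; prune
            if outcome(combo) is not None:     # this combination decides the result
                deciding.append(combo)
    return deciding

# Toy example: the reboot policy works whenever 12 and 17 are both down.
def toy_outcome(combo):
    return "reboot works" if {12, 17} <= set(combo) else None

features = find_deciding_features([1, 2, 12, 17], toy_outcome)  # [(12, 17)]
```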

Geramifard believes that this approach captures something about how human beings learn to perform new tasks. “If you teach a small child what a horse is, at first it might think that everything with four legs is a horse,” he says. “But when you show it a cow, it learns to look for a different feature — say, horns.” In the same way, Geramifard explains, the new algorithm identifies an initial feature on which to base judgments and then looks for complementary features that can refine the initial judgment. RLPy allowed the researchers to quickly test their new algorithm against a number of others. “Think of it as like a Lego set,” Geramifard says. “You can snap one module out and snap another one in its place.”

In particular, RLPy comes with a number of standard modules that represent different machine-learning algorithms; different problems (such as the network-administration problem, some standard control-theory problems that involve balancing pendulums, and some standard surveillance problems); different techniques for modeling the computer system’s environment; and different types of agents.

It also allows anyone familiar with the Python programming language to build new modules. They just have to be able to hook up with existing modules in prescribed ways.
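The "Lego" modularity described above can be illustrated with a minimal sketch. This is a generic pattern, not RLPy's actual class names or interfaces: any agent exposing `act(state)` snaps in against any problem exposing `reset()` and `step(action)`:

```python
class Experiment:
    """Minimal sketch of a modular experiment loop."""
    def __init__(self, agent, problem):
        self.agent = agent
        self.problem = problem

    def run(self, steps):
        state = self.problem.reset()
        total_reward = 0
        for _ in range(steps):
            action = self.agent.act(state)
            state, reward = self.problem.step(action)
            total_reward += reward
        return total_reward

class AlwaysReboot:                  # one interchangeable agent module
    def act(self, state):
        return "reboot"

class ToyNetwork:                    # one interchangeable problem module
    def reset(self):
        return "all_down"
    def step(self, action):
        return ("all_up", 1) if action == "reboot" else ("all_down", 0)

# Snap one module out and another in: only the constructor call changes.
total = Experiment(AlwaysReboot(), ToyNetwork()).run(steps=5)
```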

Geramifard and his colleagues found that in computer simulations, their new algorithm evaluated policies more efficiently than its predecessors, arriving at more reliable predictions in one-fifth the time.

RLPy can be used to set up experiments that involve computer simulations, such as those that the MIT researchers evaluated, but it can also be used to set up experiments that collect data from real-world interactions. In one ongoing project, for instance, Geramifard and his colleagues plan to use RLPy to run an experiment involving an autonomous vehicle learning to navigate its environment. In the project’s initial stages, however, he’s using simulations to begin building a battery of reasonably good policies. “While it’s learning, you don’t want to run it into a wall and wreck your equipment,” he says.


Wednesday, 08 March 2017 05:38

About International Women's Day (8 March)


International Women's Day (March 8) is a global day celebrating the social, economic, cultural and political achievements of women. The day also marks a call to action for accelerating gender parity.

International Women's Day (IWD) has been observed since the early 1900s - a time of great expansion and turbulence in the industrialized world that saw booming population growth and the rise of radical ideologies. International Women's Day is a collective day of global celebration and a call for gender parity. No one government, NGO, charity, corporation, academic institution, women's network or media hub is solely responsible for International Women's Day. Many organizations declare an annual IWD theme that supports their specific agenda or cause, and some of these are more widely adopted than others.

"The story of women's struggle for equality belongs to no single feminist nor to any one organization but to the collective efforts of all who care about human rights," says world-renowned feminist, journalist and social and political activist Gloria Steinem. Thus International Women's Day is all about unity, celebration, reflection, advocacy and action - whatever that looks like globally at a local level. But one thing is for sure, International Women's Day has been occurring for well over a century - and continue's to grow from strength to strength.

International Women's Day timeline journey


1908

Great unrest and critical debate was occurring amongst women. Women's oppression and inequality was spurring women to become more vocal and active in campaigning for change. Then in 1908, 15,000 women marched through New York City demanding shorter hours, better pay and voting rights.


1909

In accordance with a declaration by the Socialist Party of America, the first National Woman's Day (NWD) was observed across the United States on 28 February. Women continued to celebrate NWD on the last Sunday of February until 1913.


1910

In 1910 a second International Conference of Working Women was held in Copenhagen. A woman named Clara Zetkin (Leader of the 'Women's Office' for the Social Democratic Party in Germany) tabled the idea of an International Women's Day. She proposed that every year in every country there should be a celebration on the same day - a Women's Day - to press for their demands. The conference of over 100 women from 17 countries, representing unions, socialist parties, working women's clubs - and including the first three women elected to the Finnish parliament - greeted Zetkin's suggestion with unanimous approval, and thus International Women's Day was born.


1911

Following the decision agreed at Copenhagen, International Women's Day was honoured for the first time in Austria, Denmark, Germany and Switzerland on 19 March 1911. More than one million women and men attended IWD rallies campaigning for women's rights to work, vote, be trained, to hold public office and end discrimination. However, less than a week later, on 25 March, the tragic 'Triangle Fire' in New York City took the lives of more than 140 working women, most of them Italian and Jewish immigrants. This disastrous event drew significant attention to working conditions and labour legislation in the United States and became a focus of subsequent International Women's Day events. 1911 also saw the women's 'Bread and Roses' campaign.


1913-1914

On the eve of World War I, campaigning for peace, Russian women observed their first International Women's Day on the last Sunday in February 1913. In 1913, following discussions, International Women's Day was transferred to 8 March, and this day has remained the global date for International Women's Day ever since. In 1914, women across Europe held further rallies to campaign against the war and to express women's solidarity. For example, in London in the United Kingdom there was a march from Bow to Trafalgar Square in support of women's suffrage on 8 March 1914. Sylvia Pankhurst was arrested in front of Charing Cross station on her way to speak in Trafalgar Square.


1917

On the last Sunday of February, Russian women began a strike for "bread and peace" in response to the death of over 2 million Russian soldiers in World War I. Opposed by political leaders, the women continued to strike until four days later the Czar was forced to abdicate and the provisional Government granted women the right to vote. The date the women's strike commenced was Sunday 23 February on the Julian calendar then in use in Russia. This day on the Gregorian calendar in use elsewhere was 8 March.


1975

International Women's Day was celebrated for the first time by the United Nations in 1975. Then in December 1977, the General Assembly adopted a resolution proclaiming a United Nations Day for Women’s Rights and International Peace to be observed on any day of the year by Member States, in accordance with their historical and national traditions.


1996

The UN adopted its first annual theme in 1996: "Celebrating the past, Planning for the Future". It was followed in 1997 by "Women at the Peace table", in 1998 by "Women and Human Rights", in 1999 by "World Free of Violence Against Women", and so on each year through to the present. More recent themes have included, for example, "Empower Rural Women, End Poverty & Hunger" and "A Promise is a Promise - Time for Action to End Violence Against Women".


2000

By the new millennium, International Women's Day activity around the world had stalled in many countries. The world had moved on and feminism wasn't a popular topic. International Women's Day needed re-ignition. There was urgent work to do - battles had not been won and gender parity had still not been achieved.


2001

The global digital hub for everything IWD was launched to re-energize the day as an important platform to celebrate the successful achievements of women and to continue calls for accelerating gender parity. Each year the IWD website sees vast traffic and is used by millions of people and organizations all over the world to learn about and share IWD activity. The IWD website is made possible each year through support from corporations committed to driving gender parity. The website's charity of choice for many years has been the World Association of Girl Guides and Girl Scouts (WAGGGS), through which IWD fundraising is channelled. A more recent additional charity partnership is with global working women's organization Catalyst Inc. The IWD website adopts an annual theme that is globally relevant for groups and organizations. This theme, one of many around the world, provides a framework and direction for annual IWD activity and takes into account the wider agenda of both celebration as well as a broad call to action for gender parity. Recent themes have included "Pledge for Parity", "Make it happen", "The Gender Agenda: Gaining Momentum" and "Connecting Girls, Inspiring Futures". Themes for the global IWD website are collaboratively and consultatively identified each year and widely adopted.


2011

2011 saw the centenary of International Women's Day - the first IWD event was held exactly 100 years earlier, in 1911, in Austria, Denmark, Germany and Switzerland. In the United States, President Barack Obama proclaimed March 2011 to be "Women's History Month", calling Americans to mark IWD by reflecting on "the extraordinary accomplishments of women" in shaping the country's history. The then Secretary of State Hillary Clinton launched the "100 Women Initiative: Empowering Women and Girls through International Exchanges". In the United Kingdom, celebrity activist Annie Lennox led a superb march across one of London's iconic bridges, raising awareness in support of the global charity Women for Women International. Further charities such as Oxfam have run extensive activity supporting IWD, and many celebrities and business leaders also actively support the day.

2017 and beyond

The world has witnessed significant change and an attitudinal shift in both women's and society's thoughts about women's equality and emancipation. Many from a younger generation may feel that 'all the battles have been won for women', while many feminists from the 1970s know only too well the longevity and ingrained complexity of patriarchy. With more women in the boardroom, greater equality in legislative rights, and an increased critical mass of women's visibility as impressive role models in every aspect of life, one could think that women have gained true equality. The unfortunate fact is that women are still not paid equally with their male counterparts, women are still not present in equal numbers in business or politics, and globally women's education and health outcomes lag behind men's, while violence against women remains widespread. However, great improvements have been made. We do have female astronauts and prime ministers, school girls are welcomed into university, women can work and have a family, and women have real choices. And so each year the world inspires women and celebrates their achievements. IWD is an official holiday in many countries including Afghanistan, Armenia, Azerbaijan, Belarus, Burkina Faso, Cambodia, China (for women only), Cuba, Georgia, Guinea-Bissau, Eritrea, Kazakhstan, Kyrgyzstan, Laos, Madagascar (for women only), Moldova, Mongolia, Montenegro, Nepal (for women only), Russia, Tajikistan, Turkmenistan, Uganda, Ukraine, Uzbekistan, Vietnam and Zambia. The tradition sees men honouring their mothers, wives, girlfriends, colleagues and others with flowers and small gifts. In some countries IWD has the equivalent status of Mother's Day, where children give small presents to their mothers and grandmothers.

A global web of rich and diverse local activity connects women from all around the world, ranging from political rallies, business conferences, government activities and networking events through to local women's craft markets, theatrical performances, fashion parades and more. Many global corporations actively support IWD by running their own events and campaigns. For example, on 8 March search engine and media giant Google often changes the Google Doodle on its global search pages to honor IWD. Year on year, IWD is certainly increasing in status.

So make a difference, think globally and act locally!

Make everyday International Women's Day.

Do your bit to ensure that the future for girls is bright, equal, safe and rewarding.


When everything we use is networked, we're all going to need more bandwidth.

Think about how annoyed you get when you lose your cell signal, and you can see why Intel is pushing for advances in the next generation of networking, also known as 5G. Sure, the company stands to profit from making chips and networking equipment to support faster broadband. Consumers, too, stand to benefit from a future where more things in their lives are connected. To get there, though, we’re all going to need more bandwidth.

At Mobile World Congress, Intel demonstrated several initiatives for developing 5G capabilities. Watching virtual reality’s often stuttery video can make people queasy, but Intel demonstrated how 5G could let you stream 8K VR content. The company also showed how self-driving cars will need a speedy 5G network to communicate with other cars and infrastructure so they can move safely. Even in a smart home, think of the sheer quantity of things being connected, from TVs and smartphones (plus their streaming content) to window shades and even coffee pots. Multiply this to entire neighborhoods, and you realize that we’ll need 5G, or we’ll all be arguing over who’s hogging the internet.

When you think about how close the demos are to reality—how many people already use smart-home technology, how fast self-driving technology is advancing—you can see why Intel’s in such a hurry to make this happen. While full 5G is still some years away, Julie Coppernoll, Intel’s VP of Client Computing, says to keep an eye on 2020 for major developments around the Olympics. That’s feeling very close.


Friday, 03 March 2017 05:01

What are On-Page Factors?


There are several on-page factors that affect search engine rankings. These include:

Content of Page

The content of a page is what makes it worthy of a search result position. It is what the user came to see and is thus extremely important to the search engines. As such, it is important to create good content. So what is good content? From an SEO perspective, all good content has two attributes. Good content must supply a demand and must be linkable.

Good content supplies a demand:

Just like the world’s markets, information is affected by supply and demand. The best content is that which does the best job of supplying the largest demand. It might take the form of an XKCD comic that is supplying nerd jokes to a large group of technologists or it might be a Wikipedia article that explains to the world the definition of Web 2.0. It can be a video, an image, a sound, or text, but it must supply a demand in order to be considered good content.

Good content is linkable:

From an SEO perspective, there is no difference between the best and worst content on the Internet if it is not linkable. If people can’t link to it, search engines will be very unlikely to rank it, and as a result the content won’t drive traffic to the given website. Unfortunately, this happens a lot more often than one might think. A few examples of this include: AJAX-powered image slide shows, content only accessible after logging in, and content that can't be reproduced or shared. Content that doesn't supply a demand or is not linkable is bad in the eyes of the search engines—and most likely some people, too.

Title Tag

Title tags are the second most important on-page factor for SEO, after content. The title tag tells both users and search engines what the topic of a given page is.


URL Structure

Along with smart internal linking, SEOs should make sure that the category hierarchy of the given website is reflected in URLs.

The following is a good example of URL structure:

http://www.example.org/games/video-game-history

This URL clearly shows the hierarchy of the information on the page (history as it pertains to video games in the context of games in general). This information is used to determine the relevancy of a given web page by the search engines. Due to the hierarchy, the engines can deduce that the page likely doesn’t pertain to history in general but rather to that of the history of video games. This makes it an ideal candidate for search results related to video game history. All of this information can be speculated on without even needing to process the content on the page.

The following is a bad example of URL structure:

http://www.imdb.com/title/tt0468569/

Unlike the first example, this URL does not reflect the information hierarchy of the website. Search engines can see that the given page relates to titles (/title/) and is on the IMDB domain but cannot determine what the page is about. The reference to “tt0468569” does not directly infer anything that a web surfer is likely to search for. This means that the information provided by the URL is of very little value to search engines.

URL structure is important because it helps the search engines to understand relative importance and adds a helpful relevancy metric to the given page. It is also helpful from an anchor text perspective because people are more likely to link with the relevant word or phrase if the keywords are included in the URL.
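The advice above can be sketched as a short routine that builds a hierarchy-reflecting, keyword-bearing URL path. This is a simple illustration; real CMSs and web frameworks have their own slug-generation rules, and the domain and section names here are made up for the example:

```python
import re

def slugify(text):
    """Lowercase, replace runs of non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def build_url(domain, *sections):
    """Reflect the category hierarchy in the URL path, broadest first."""
    return "https://" + domain + "/" + "/".join(slugify(s) for s in sections)

url = build_url("www.example.com", "Games", "Video Game History")
# A descriptive path like this conveys the page's topic to search
# engines before they even process the page content.
```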



