All individuals are unique, but millions of people share names. How do you distinguish -- or, as it is technically known, disambiguate -- people with common names and determine which John Smith or Maria Garcia or Wei Zhang or Omar Ali is a specific John Smith, Maria Garcia, Wei Zhang or Omar Ali -- or even someone previously unidentified?

This conundrum occurs in a wide range of settings, from the bibliographic -- which Anna Hernandez authored a specific study? -- to law enforcement -- which Robert Jones is attempting to board an airplane flight?

Two computer scientists from the School of Science at Indiana University-Purdue University Indianapolis and a Purdue University doctoral student have developed a novel machine-learning method to provide better solutions to this perplexing problem. They report that the new method improves on existing approaches to name disambiguation because it works on streaming data, enabling the identification of previously unencountered John Smiths, Maria Garcias, Wei Zhangs and Omar Alis.

Existing methods can disambiguate an individual only if the person's records are present in the machine-learning training data. The new method, by contrast, performs non-exhaustive classification: it can detect that a new record arriving in streaming data actually belongs to a fourth John Smith, even if the training data contains records of only three different John Smiths. Non-exhaustiveness is a very important aspect of name disambiguation because training data can never be exhaustive; it is impossible to include records of all living John Smiths.

"Bayesian Non-Exhaustive Classification -- A Case Study: Online Name Disambiguation using Temporal Record Streams" by Baichuan Zhang, Murat Dundar and Mohammad al Hasan is published in Proceedings of the 25th International Conference on Information and Knowledge Management. Zhang is a Purdue graduate student. Dundar and Hasan are IUPUI associate professors of computer science and experts in machine learning.

"We looked at a problem applicable to scientific bibliographies using features like keywords and co-authors, but our disambiguation work has many other real-life applications -- in the security field, for example," said Hasan, who led the study. "We can teach the computer to recognize names and disambiguate information accumulated from a variety of sources -- Facebook, Twitter and blog posts, public records and other documents -- by collecting features such as Facebook friends and keywords from people's posts using the identical algorithm. Our proposed method is scalable and will be able to group records belonging to a unique person even if thousands of people have the same name, an extremely complicated task.

"Our innovative machine-learning model can perform name disambiguation in an online setting instantaneously and, importantly, in a non-exhaustive fashion," Hasan said. "Our method grows and changes when new persons appear, enabling us to recognize the ever-growing number of individuals whose records were not previously encountered. Also, some names are more common than others, so the number of individuals sharing such a name grows faster than for other names. While working in a non-exhaustive setting, our model automatically detects such names and adjusts the model parameters accordingly."

Machine learning employs algorithms -- sets of steps -- to train computers to classify records belonging to different classes. Algorithms are developed to review data, to learn patterns or features from the data, and to enable the computer to learn a model that encodes the relationship between patterns and classes so that future records can be correctly classified. In the new study, for a given name value, computers were "trained" on records of different individuals with that name to build a model that distinguishes between those individuals -- even individuals whose information had not been included in the training data previously provided to the computer.
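The paper develops a Bayesian model, but the core idea of non-exhaustive classification can be sketched with a much simpler stand-in: if a new record is too dissimilar from every known class, open a new class for it. Everything below (the names, features, and Jaccard-similarity threshold) is an invented illustration, not the authors' method:

```python
# Simplified sketch of non-exhaustive classification. The published method
# is Bayesian; here a plain similarity threshold stands in for the idea
# that records unlike every known class should start a new class.

def jaccard(a, b):
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(record, classes, threshold=0.3):
    """Assign a record to the best-matching class, or open a new one."""
    best, score = None, 0.0
    for label, features in classes.items():
        s = jaccard(record, features)
        if s > score:
            best, score = label, s
    if score < threshold:                        # no known class fits well
        best = f"john_smith_{len(classes) + 1}"  # non-exhaustive: new class
        classes[best] = set(record)
    else:
        classes[best] |= record                  # refine the matched class
    return best

# Training data: three known John Smiths (hypothetical features).
classes = {
    "john_smith_1": {"data mining", "purdue", "coauthor:dundar"},
    "john_smith_2": {"cardiology", "mayo clinic"},
    "john_smith_3": {"jazz", "chicago", "trumpet"},
}

# A streaming record matching none of them is assigned a fourth class.
label = classify({"astrophysics", "caltech", "coauthor:thorne"}, classes)
print(label)  # john_smith_4
```

The fixed threshold is the toy part: the article notes the real model adjusts its parameters automatically as common names accumulate individuals.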

"Features" are bits of information with some degree of predictive power to define a specific individual. The researchers focused on three types of features:

1. Relational or association features to reveal persons with whom an individual is associated: for example, relatives, friends, and colleagues

2. Text features, such as keywords in documents: for example, repeated use of sports-, culinary-, or terrorism-associated keywords

3. Venue features: for example, institutions, memberships or events with which an individual is currently or was formerly associated
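To make these feature types concrete, here is a hypothetical sketch of how a single bibliographic record might be flattened into tagged relational, text and venue features (the field names and values are invented):

```python
# Hypothetical sketch: turning one record into the three feature types
# described above. Field names and sample values are made up.

def extract_features(record):
    """Flatten a record into tagged relational, text, and venue features."""
    features = set()
    for coauthor in record.get("coauthors", []):   # 1. relational features
        features.add(f"coauthor:{coauthor}")
    for keyword in record.get("keywords", []):     # 2. text features
        features.add(f"keyword:{keyword}")
    if "venue" in record:                          # 3. venue features
        features.add(f"venue:{record['venue']}")
    return features

paper = {
    "author": "John Smith",
    "coauthors": ["M. Dundar", "M. Hasan"],
    "keywords": ["name disambiguation", "streaming"],
    "venue": "CIKM",
}
print(sorted(extract_features(paper)))
```

Tagging each value with its feature type keeps a co-author named "CIKM" distinct from the venue "CIKM" when records are compared.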

The study was funded by the National Science Foundation through CAREER awards to Hasan and Dundar in 2012 and 2013, respectively.

The researchers hope to continue this line of inquiry, scaling up with the support of enhanced technologies, including distributed computing platforms.


Designing a new website can be a daunting process, only made more complicated by the volume of information that sometimes needs to be organized and incorporated. Sure, designers might create wireframes and mockups to plan out the site before they get started, but what about non-designers? How do you get the structure of the site figured out before you turn everything over to your designer? A sitemap can be an effective planning tool for both designers and non-designers alike. It’s a centralized planning tool that can help organize and clarify the content that needs to be on your site, as well as help you eliminate unnecessary pages. And a sitemap, because it’s basically just an outline or flow-chart of the content your site needs, can be created by anyone, regardless of their design skills. Read on for more reasons why a sitemap should be the starting point for your new website design.

Clarify Your Site’s Purpose and Goals

Every website should have a goal and a purpose. Sites without these are often unfocused, hard to navigate, and present poor user experiences. The visitor is left wondering, “what am I supposed to be doing here?” You never want your visitor to be confused when navigating your website or interacting with your content.

A sitemap can help you clarify what your site’s goals are before you start designing or creating content. By deciding exactly what you want from your site and then mapping it out, you can ensure that every part of your website is reinforcing your goals. Then it’s possible to cut parts that aren’t directly tied to the site’s purpose before they become an integral part of the site’s architecture.
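Because a sitemap is essentially an outline, even a few lines of code can draft one. This sketch represents an invented site as a nested structure and lists every planned page, which makes it easy to count pages and spot ones that don't serve a goal:

```python
# A sitemap is basically an outline, so a nested dict is enough to draft
# one. The site structure below is a made-up example.

def flatten(tree, prefix=""):
    """Yield the path of every page in a nested sitemap."""
    for name, children in tree.items():
        path = f"{prefix}/{name}"
        yield path
        yield from flatten(children, path)

sitemap = {
    "home": {
        "products": {"pricing": {}, "features": {}},
        "about": {"team": {}},
        "contact": {},
    }
}
for page in flatten(sitemap):
    print(page)
```

A spreadsheet or flowchart works just as well; the point is that the draft exists before any design work starts.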

Helpful Hint #1: All too often, companies start with a “business card” website and later “Frankenstein” together functionality based on impulsive decisions. Instead, take a step back before the first company site is launched and determine the goals of the website. This can save an enormous amount of time, money, energy and resources. Here are some web design goal worksheets to help you think through your website goals.

Avoid Duplicate Content

Duplicating content on your website is a waste of time and resources. If you’ve already included something on one page, why not just link to that page from another place that needs the same information?

If you don’t have a sitemap, you may not realize you’re duplicating content. You’ll just create pages as you need them, without tracking what’s already been created. This can eventually lead to conflicting information on your site, as one page is updated but another is not. Simplify things by making sure duplicate content is combined into a single page, linked to from wherever the content needs to be referenced.

Helpful Hint #2: Duplicate content can create a situation where the search engines arbitrarily choose what they deem is the most important page between two similar pages. Don’t let this happen. They may choose to ignore a page that is designed to convert, and instead, index a similar page that doesn’t. Read this SEOmoz article for information on how to avoid duplicate content.

Streamline Your Conversion Funnel

You want the minimum number of steps from point A to point B in your conversion funnel. The more steps, the more chance a visitor has to leave the site without completing their purchase or signup.

Use your sitemap to figure out what the necessary steps are, and to combine steps where possible. A visual representation, like a flowchart, can make streamlining your funnel easier. Try one after you’ve got a sitemap drafted to ensure you aren’t adding extra steps anywhere.

Helpful Hint #3: Use services like Kissmetrics to analyze your conversion funnels. By setting up conversion funnel software before a website goes live, you’ll be able to immediately test and determine the optimal signup and selling processes. And obviously, the sooner you can accurately track your conversion funnels, the more revenue you can earn starting day one.

Get Everyone On the Same Page

Rarely are websites built by a single person with no outside input. There may be a designer, a project manager, a developer or two, a copywriter or content creator, and someone from marketing or sales involved, and sometimes even more people than that. A sitemap makes sure everyone involved in the project is on the same page.

Your sitemap should be kept in a format that is accessible to everyone working on the project, and should be kept in a central location where those people can view it (and any changes made to it). Your sitemap isn’t a static document, and it’s likely changes will be made as the project progresses. The sitemap can serve as a central clearing house for tracking your project, what’s been completed, what still needs work, and what progress is being made.

Helpful Hint #4: Getting everyone on the same page is helpful for designing your company site. More importantly, it reflects how your company should operate: having everyone in your company aligned with your company’s core values, mission statement and high-level goals has been found to lead to the highest chance of start-up success.


Without a sitemap, you may spend a lot of time creating unnecessary pages, or designing sites that are more complicated than they need to be. It’s worth taking an afternoon to sit down with the team responsible for creating your site’s content and figure out what’s necessary, how pages are interrelated, and what can be cut from your site, before you start designing. Remember, it’s less expensive (in terms of both time and money) to add or eliminate something in the early stages than to have to do so when your site is nearly complete.


“What is rooting? Why should I root my Android device?” These are common questions that I get asked quite often. Today’s lesson covers both the advantages and disadvantages of rooting your Android devices. But before we get started, a word of caution: rooting or modifying your phone in any way will void your manufacturer’s warranty and possibly “brick” it. What does “bricking” your device mean, you ask? Exactly what you think... it means screwing up your phone’s software so badly that the phone can no longer function properly and is pretty much as useless as a brick. I do not in any way recommend that anyone root their Android device. This article simply introduces the subject of rooting and presents both the pros and cons so that you can make an educated decision on your own.

What is Rooting?

“Rooting” your device means obtaining “superuser” rights and permissions to your Android’s software. With these elevated user privileges, you gain the ability to load custom software (ROMs), install custom themes, increase performance, increase battery life, and install software that would otherwise cost extra money (e.g., WiFi tethering). Rooting is essentially “hacking” your Android device. In the iPhone world, this is the equivalent of “jailbreaking” your phone.

Why is it called Rooting?

The term “root” comes from the Unix/Linux world, where it describes a user who has “superuser” rights or permissions to all the files and programs in the OS (operating system). Because the root user has “superuser” privileges, they can essentially change or modify any of the software code on the device. Your phone manufacturer/carrier only gives you “guest” privileges when you purchase your device, and for good reason: they don’t want you getting into certain parts of the software on your phone and screwing it up beyond repair. Locking everything down makes the devices much easier to manage and update, since every user is running the same unmodified version of the phone’s software, and that in turn makes the devices much easier to support. But for the tech-savvy crowd, only having “guest” privileges on your device is pretty lame, and it locks down a lot of potentially useful features.

What are the Advantages of Rooting?

Custom Software (ROMs)

You may have heard of people loading custom “ROMs” on their devices. A “ROM” is the software that runs your device; it is stored in the “Read Only Memory” of your device. There are many great custom ROMs available that can make your Android device look and perform drastically differently. For instance, you might have an older Android device that is stuck on an older version of the Android OS and is not getting any of the newer updates. With a custom ROM, you could load up the latest and greatest available Android version and bring that antiquated device up to par with some of the newer ones. There are lots of great ROMs available for many different phones, and it is up to you to find the one that best meets your needs. The best place I have found for custom Android ROMs is the XDA Developers Forums. The XDA community is filled with smartphone enthusiasts and developers for the Android platform. Check them out and see if you find any ROMs that would meet your needs.

Custom Themes

Themes are basically the graphics that appear on your Android device. Rooting your device allows you the ability to fully customize just about every graphic on your device. You can load custom themes that totally change the look and feel of your device.

Kernel, speed, and battery

There are many custom ROMs and apps available for rooted devices that will allow you to drastically improve performance (speed) and extend battery life on your device. A lot of developers tweak the kernel (the layer of code that handles communication between the hardware and software) for added performance, battery life, and more.


Baseband updates

Rooting your device grants you the ability to update the basebands on your smartphone. The baseband is what controls the radio on your device. By updating to the latest baseband, you can potentially improve both the signal and the quality of your phone calls.

Latest Versions of Android

As mentioned earlier, custom ROMs can allow you to update to the latest version of the Android OS before it is officially released. This is a great feature for those who are tech-savvy and want to stay on top of the latest and greatest software before it hits the mainstream crowd. It is also useful if you have an outdated device that is no longer being updated by the manufacturer.

Backing up your device

The ability to easily back up all of your apps and data is one feature that is sorely missed in the stock build of Android. But if you root your device, backing up everything on your device (both apps and data) becomes a simple task. Titanium Backup is a must-have app for anyone who has rooted their device and wants to back up and restore their phone.

Unlocking Additional Features

By rooting your Android device, you also gain the ability to unlock some features that your carrier may charge for. One example is enabling free WiFi and USB tethering, which many carriers charge money for. Now, I’m not suggesting you do this, but I did want to make you aware that it is possible. However, your carrier may catch on to the fact that you are using your device as a free WiFi hotspot and figure out a way to charge you for it. So use this feature at your own risk!

What are the Disadvantages of Rooting?

Bricking

The number one reason not to root your device is the potential risk of “bricking” it. As mentioned earlier, “bricking” your device means screwing up your phone software so badly that your phone can no longer function properly and is pretty much as useless as a brick. You would likely need to purchase a new Android device since the manufacturer of your device will void the warranty after any attempts at rooting.


Security risks

There is an increased risk of unknowingly installing malicious software when you root an Android device. Root access circumvents the security restrictions put in place by the Android OS, and there isn’t really an effective way to tell just what an application intends to do with that “superuser” power. You are putting a lot of trust in the developer’s hands. In most cases, these applications are open source, and the community can take a look at the source code to assess the risk. But, nevertheless, the risk is there. Fortunately, malicious software on rooted devices hasn’t really been a problem as of yet, but I thought it was worth mentioning since it could become a risk in the future. I’d recommend installing an antivirus and security app just to be safe. Lookout Mobile Security seems to be one of the best ones available at the moment.


Wednesday, 08 February 2017 04:55

What is PPC? The Importance of Pay Per Click (PPC)


What is PPC?

PPC stands for pay-per-click, a model of internet marketing in which advertisers pay a fee each time one of their ads is clicked. Essentially, it’s a way of buying visits to your site, rather than attempting to “earn” those visits organically.

Search engine advertising is one of the most popular forms of PPC. It allows advertisers to bid for ad placement in a search engine's sponsored links when someone searches on a keyword that is related to their business offering.

The Importance of Pay Per Click (PPC)

Many sites are built, but not many sites get significant traffic. It is a difficult chore to get your site noticed, even if you are providing a valuable service. Considerable effort and cost go into Search Engine Optimization (SEO), and rightfully so: if you can dial up the magic combination that gets your site organic traffic from a search engine, your site can flourish. But SEO takes time. For one, it takes time for your site to be indexed. Also, new sites generally suffer in things like PageRank and length of time being indexed, so they won’t rank high overall. This is where Pay Per Click comes in. PPC through sites like Google will bring people to your site immediately, based on keywords that you choose. It can kickstart your site into overdrive as soon as your campaign starts. And PPC has several things going for it.

Many people make the mistake of not trying PPC because of the cost. The truth is that it can be very inexpensive to try, and if you get success from it, it pays for itself. In some ways, it is more cost-effective than many of the SEO programs you see pitched all over the internet. Paying someone a lot of money for SEO doesn’t necessarily mean you will see returns from it. PPC is sure to bring people to your site, and if you pick the right keywords, it will be people ready to buy your product or service. PPC is a bit of a puzzle to do correctly, though. You have to figure out a daily budget, which keywords you want to use, and the best way to optimize your spend.

If you don’t know where to start on a PPC campaign, find a good consultant (maybe someone like NewSouth Interactive). A good consultant will help you set up a campaign and start attracting people to your site.


Monday, 06 February 2017 09:09

Top UI Design Trends to Expect in 2017


The last couple of years saw an increase in new ideas and technology trends, with all the big browsers giving strong support to the HTML5 and CSS3 standards, as well as ultra-fast JavaScript engines. Web industry trends usually don’t last long, and new concepts get introduced each year; staying in touch with the latest developments is no longer an option but a necessity for all those looking to stay ahead of the competition. This is especially true in the ever-changing landscape that is web design, so here are some predictions of what you can expect in the coming year.

1. Increase in use of responsive design

Although responsive design has been around for some time now, experts predict that both small and big brands and businesses will focus on making their online presence responsive. This is due to two main reasons, one of them being the cost-efficiency of building a single website capable of successfully delivering content no matter what device is being used to view it. The second reason has to do with Google’s update to its ranking algorithm, which now raises the ranking of websites that optimize their content for mobile devices, i.e. have a responsive design.

2. Businesses adopting the mobile-first design

As the name implies, the mobile-first approach describes a process of designing the website for mobile devices first and then focusing on larger-screened devices. Like responsive design, this isn’t anything new; however, mobile devices have become the primary tools used for browsing the internet. With small screen real estate comes the need to reorganize the content and remove all the information which might otherwise be displayed on a bigger screen but is deemed unnecessary on mobile. This forces the major brands and businesses to reconsider what their core content is and how they will convey their message to the average user.

3. Frameworks and UI patterns

The advent of responsive and mobile-first design has shaped the way today’s websites look and operate. Pre-designed WordPress themes have become the norm as more and more websites use them, which has unfortunately led to most of them having the same look and feel. That said, the consistent use of UI and UX patterns has led to a more unified and consistent user experience across all platforms. Keeping users happy by utilizing similar material design frameworks will most certainly become the stepping stone for all future web design practices.

4. Cards and grid UIs

One of the biggest trends in web design that has arisen from the material design guidelines is the use of card-based UIs. Tech giants such as Google, Facebook, Twitter and even Pinterest have popularized the use of cards, as they allow content to be broken down into smaller, easily digestible chunks which are easier for users to navigate. Cards play really well into responsive design and work well on mobile, tablet and desktop alike. With the attention span of the average user becoming shorter and shorter, having large amounts of content organized into manageable, searchable grids and cards will become a go-to practice for both large and small businesses.

5. Even more parallax scrolling

Once used simply to navigate from the top of the page to the bottom, scrolling is now being used by web designers as a creative way of displaying and sifting through content. It’s a technique where the mechanic of scrolling gives off a sense of depth, with the background and the foreground moving at different speeds. When used well, it works excellently with all the different varieties of content, whether text, images or video. However, most single-page websites tend to overuse it and, in turn, make the site’s usability worse.

6. Advances in animation

Brands and companies are moving away from using static images on their websites and turning to animation as a way to both engage their users and add new levels of usability to their pages. This wouldn’t be possible without the advances in CSS3, HTML5, and jQuery, and more and more designers are turning to them in order to present the user with the most interesting and engaging content. Animations range from small distractions while the page is loading to hover animations, navigation cues and full-screen visuals which are either the focus of the website or can later be integrated into parallax scrolling.

7. Bold colors everywhere

The past few years have seen the rise of over-saturated, rich-colored websites with vibrant hues and mesmerizing gradients. This is partly due to advances in screen technologies, which can now reproduce more colors than ever before, but also to the fact that big brands are trying to move away from what was considered safe and go for a new and more exciting approach. The best example of that is Instagram’s logo change: it once had a flat, neutral design, and now it’s a lot more vibrant and colorful. It’s a safe bet to assume that more and more brands will move towards vivid and rich color palettes both online and in traditional mediums.

8. Video as the content of choice

Nothing grabs the attention of the user as quickly as a good video narrative does. Unlike text, it requires little to no conscious effort to consume, which is why all the big brands are starting to use it more and more as a content delivery method. Used for marketing, storytelling, and product placement, video is becoming the majority of internet traffic, due to advances in video chatting and live-streaming and its overall adoption by the social media giants. To say that video will dominate the internet as the content of choice is no overstatement.

9. Fewer stock images and more originality

Using similar WordPress themes and UI patterns has led to the majority of websites looking and working in strikingly similar ways. Using stock images and videos on top of that only adds to the sameness, until almost everything looks alike. Standing out from the competition has become more important than ever, which is why it’s safe to assume that brands will work towards authenticity in order to attract new customers. This means we’ll start to see more diverse and unique ways for businesses to produce and present their content.


Social media can be as easy as posting, tweeting, liking and sharing, but for those who have their hands in business and internet marketing, social media is much vaster and far harder to win over. Much goes on beyond posting and sharing, and these practices are better known as social media marketing strategies.

Before we start discussing the topic, one thing we would like to say: though social media networks are widely used for business promotion and conversions, these networks should not be treated only as “promotional tools”. Instead, you need to keep in mind the meaning of “social” while using these networks to promote your business. In short, social media marketing strategies should be implemented with a focus on the network you are using, rather than just on your business promotion and sales.

“We don’t have a choice on whether we do social media, the question is how well we do it.” – Erik Qualman

However, using social media platforms for your business or blog can be overwhelming even if you have great experience, because there is a lot to learn and implement. Here are some social media facts that you must know if you are engaged in social media marketing or are interested in increasing your knowledge about it.

Social media marketing is a long-term strategy

The world of social media marketing has evolved a great deal; things that were once easy to implement on social media now take much more time and effort because of the heightened competition. So, the first thing that you should know and believe is that social media marketing is a long-term strategy. You should not expect changes to happen overnight; instead, you should focus your efforts so that the time to results can be reduced.

If you are running a small business, it may be daunting to invest effort and time into something that does not promise immediate results. But you need to do so if you want an effective online presence for your business.

Success is driven by engagement

Engagement through social media is the key to success. The more you engage with your audience, the more you will get back. You must agree that everyone likes to be recognized, noticed and made to feel special. So, when you engage with your audience, they will feel special and a connection will be created between you. Such an audience can do a lot to expand your reach. And there is nothing special you need to do to improve your engagement: read their posts, like them, share them and comment on them. It will give them a chance to view your profile and share it with others.

“Engage and encourage your audience... Social media is a community effort; everyone is an important asset.” - Gurpreet Walia, CEO of Suffescom Solutions

Social media profiles are listed in the search results

Though many of you will be aware of this fact, we would like to remind you that social media profiles are considered in ranking. When you search a brand name on Google, its Facebook, Twitter or LinkedIn profiles are shown in the first-page results. What we want to tell you is: create your social media profiles carefully and show your best on them. Along with that, keep these profiles up to date even if you don't use them on a regular basis. Create these profiles keeping in mind that they are going to represent your brand.

Visual content is the king on social media

If you ask people whether they would like to watch a video or read an article, about 90% of them will choose the former. Ask yourself the same question and we are sure you would choose the video too. Videos not only convey your message in an effective way but also consume less time. So, focus on creating videos rather than long blogs and articles. Don't hesitate to spend on professional video production, as the quality of the video will impact your brand's image.

You must stop doing these things

  • Don't compare your stats with others.
  • Don't post because you have to post; focus on the quality of the content.
  • Social media has much to offer, so try not to get distracted. While using any social media network go directly to the task you have to perform; there is a multitude of news feeds to distract you but don't get lost.
  • Stay away from direct promotion.

In short, social media is about sharing ideas, building relationships and creating your own identity. If you are using it for SEO and marketing, you need to keep these things in mind while creating and implementing your strategies.


The Hypertext Transfer Protocol (HTTP) underpins the World Wide Web and cyberspace. If that sounds dated, consider that the version of the protocol most commonly in use, HTTP 1.1, is nearly 20 years old. When it was ratified back in 1997, floppy drives and modems were must-have digital accessories and Java was a new, up-and-coming programming language. Ratified in May 2015, HTTP/2 was created to address some significant performance problems with HTTP 1.1 in the modern Web era. Adoption of HTTP/2 has increased in the past year as browsers, Web servers, commercial proxies, and major content delivery networks have committed to or released support.

Unfortunately for people who write code for the Web, transitioning to HTTP/2 isn’t always straightforward and a speed boost isn’t automatically guaranteed. The new protocol challenges some common wisdom when building performant Web applications and many existing tools—such as debugging proxies—don’t support it yet. This post is an introduction to HTTP/2 and how it changes Web performance best practices.

Binary frames: The ‘fundamental unit’ of HTTP/2

One benefit of HTTP 1.1 (over non-secure connections, at least) is that it supports interaction with Web servers using text in a telnet session on port 80: typing GET / HTTP/1.1 returns an HTML document on most Web servers. Because it’s a text protocol, debugging is relatively straightforward.
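The text-based interaction described above can be sketched in a few lines. The snippet below simply builds the request text you would type into a telnet session; the host "example.com" is a placeholder:

```python
# A minimal sketch of a raw HTTP/1.1 request as typed into a telnet
# session on port 80. Each header line ends in CRLF, and a blank line
# terminates the header section; most Web servers then return an HTML
# document.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
```

Because the whole exchange is plain ASCII like this, inspecting and debugging HTTP 1.1 traffic is relatively straightforward.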

Instead of text, requests and responses in HTTP/2 are represented by a stream of binary frames, described as a “basic protocol unit” in the HTTP/2 RFC. Each frame has a type that serves a different purpose. The authors of HTTP/2 realized that HTTP 1.1 will exist indefinitely (the Gopher protocol still is out there, after all). The binary frames of an HTTP/2 request map to an HTTP 1.1 request to ensure backwards compatibility.
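As a rough illustration of the binary framing layer, the sketch below parses the 9-byte frame header defined in the HTTP/2 RFC (RFC 7540, section 4.1). The sample HEADERS-frame bytes are made up for the example:

```python
import struct

def parse_frame_header(header: bytes):
    """Parse a 9-byte HTTP/2 frame header: a 24-bit payload length,
    an 8-bit frame type, 8 bits of flags, and a 31-bit stream id
    (the top bit of the last 4 bytes is reserved)."""
    if len(header) != 9:
        raise ValueError("HTTP/2 frame headers are exactly 9 bytes")
    length_hi, length_lo, frame_type, flags, stream_id = struct.unpack("!BHBBI", header)
    length = (length_hi << 16) | length_lo
    stream_id &= 0x7FFFFFFF  # clear the reserved bit
    return length, frame_type, flags, stream_id

# A hypothetical HEADERS frame (type 0x1), 16-byte payload, END_HEADERS
# flag (0x4) set, on stream 1:
sample = bytes([0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])
print(parse_frame_header(sample))  # (16, 1, 4, 1)
```

This is why plain telnet no longer works for debugging: the bytes on the wire are structured frames, not readable text.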

There are some new features in HTTP/2 that don’t map to HTTP 1.1, however. Server push (also known as “cache push”) and stream reset are features that correspond to types of binary frames. Frames can also have a priority that allows clients to give servers hints about the priority of some assets over others.

Other than using Wireshark 2.0, one of the easiest ways to actually see the individual binary frames is the net-internals tab of Google Chrome (type chrome://net-internals/#http2 into the address bar). The data can be hard to understand for large Web pages. Rebecca Murphey wrote a useful tool for displaying it visually in the command line.

Additionally, the protocol used to fetch assets can be displayed in Chrome's Developer Tools.

All of the HTTP/2 requests in this listing use a secure connection over Transport Layer Security (TLS). All major browsers require HTTP/2 connections to be secure. This is done for a practical reason: an extension of TLS called Application-Layer Protocol Negotiation (ALPN) lets servers know the browser supports HTTP/2 (among other protocols) and avoids an additional round-trip. This also helps services that don’t understand HTTP/2, such as proxies—they see only encrypted data over the wire.
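To give an idea of how ALPN is wired up on the client side, here is a minimal sketch using Python's standard ssl module; the protocol list and the surrounding connection code are illustrative assumptions, not part of the article:

```python
import ssl

# Sketch of client-side ALPN setup: "h2" is the ALPN identifier for
# HTTP/2, with "http/1.1" offered as a fallback. The client advertises
# these during the TLS handshake, so no extra round-trip is needed to
# agree on the protocol.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

# After wrapping a socket with this context and completing the TLS
# handshake, ssl_sock.selected_alpn_protocol() would return "h2" if
# the server agreed to speak HTTP/2.
```

Because the negotiation happens inside TLS, intermediaries that don't understand HTTP/2 see only encrypted bytes, as described above.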

Reducing latency with multiplexing

A key performance problem with HTTP 1.1 is latency, or the time it takes to make a request and receive a response. This issue has become more pronounced as the number of images and amount of JavaScript and CSS on a typical Web page continue to increase. Every time an asset is fetched, a new TCP connection is generally needed. This requirement is important for two reasons: the number of simultaneous open TCP connections per host is limited by browsers and there’s a performance penalty incurred when establishing new connections. If a physical Web server is far away from users (for example, a user in Singapore requesting a page hosted at a data center on the U.S. East Coast), latency also increases. This scenario is not uncommon—one recent report says that more than 70% of global Internet traffic passes through the unmarked data centers of Northern Virginia.

HTTP 1.1 offers different workarounds for latency issues, including pipelining and the Keep-Alive header. However, pipelining was never widely implemented and the Keep-Alive header suffered from head-of-line blocking: the current request must complete before the next one can be sent.

In HTTP/2, multiple asset requests can reuse a single TCP connection. Unlike HTTP 1.1 requests that use the Keep-Alive header, the request and response binary frames in HTTP/2 are interleaved, and head-of-line blocking does not happen. The cost of establishing a connection (the well-known “three-way handshake”) is incurred only once per host. Multiplexing is especially beneficial for secure connections because of the performance cost of multiple TLS negotiations.

Implications for Web performance: goodbye inlining, concatenation, and image sprites?

HTTP/2 multiplexing has broad implications for frontend Web developers. It removes the need for several long-standing workarounds that aim to reduce the number of connections by bundling related assets, including:

  • Concatenating JavaScript and CSS files: Combining smaller files into a larger file to reduce the total number of requests.
  • Image spriting: Combining multiple small images into one larger image.
  • Domain sharding: Spreading requests for static assets across several domains to increase the total number of open TCP connections allowed by the browser.
  • Inlining assets: Bundling assets with the HTML document source, including base-64 encoding images or writing JavaScript code directly inside <script> tags.

A common concatenation pattern has been to bundle stylesheet files for different pages in an application into a single CSS file to reduce the number of asset requests. This large file is then fingerprinted with an MD5 hash of its contents in the filename so it can be aggressively cached by browsers. Unfortunately, this approach means that a very small change to the visual layout of the site, like changing the font style for a header, requires the entire concatenated file to be downloaded again.
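The fingerprinting step described above can be sketched in a few lines; the helper name and sample file below are hypothetical:

```python
import hashlib

def fingerprint(filename: str, contents: bytes) -> str:
    """Embed an MD5 digest of the file's contents in its name, so the
    file can be cached aggressively: any change to the contents yields
    a new filename, and therefore a new URL for browsers to fetch."""
    digest = hashlib.md5(contents).hexdigest()
    stem, _, ext = filename.rpartition(".")
    return f"{stem}-{digest}.{ext}"

print(fingerprint("bundle.css", b"h1 { font-weight: bold; }"))
```

The downside follows directly from this scheme: changing one byte of a concatenated bundle changes the digest, so the entire file gets a new URL and must be downloaded again.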

When smaller asset files are fingerprinted, significant amounts of JavaScript and CSS components that don’t change frequently can be cached by browsers—a small refactor of a single function no longer invalidates a massive amount of JavaScript application code or CSS.

Lastly, deprecating concatenation can reduce frontend build infrastructure complexity. Instead of several pre-build steps that concatenate assets, the smaller files can be referenced directly from the HTML document.

Potential downsides of using HTTP/2 in the real world

Optimizing only for HTTP/2 clients potentially penalizes browsers that don’t yet support it. Older browsers still prefer bundled assets to reduce the number of connections. As of February 2016, global browser support of HTTP/2 was reported at 71%. Much like dropping Internet Explorer 8.0 support, the decision to adopt HTTP/2 or go with a hybrid approach must be made using relevant data on a per-site basis.

As described in a post by Khan Academy Engineering that analyzed HTTP/2 traffic on its site, unbundling a large number of assets can actually increase the total number of bytes transferred. With zlib, compressing a single large file is more efficient than compressing many small files. The effect can be significant on an HTTP/2 site that has unbundled hundreds of assets.
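The zlib effect is easy to reproduce. The snippet below is a rough simulation using made-up "module" strings, not real site data:

```python
import zlib

# Simulate 200 small, similar JavaScript files. Compressed together,
# zlib can exploit the redundancy across files; compressed separately,
# each file pays the per-stream overhead and loses that shared context.
chunks = [
    ("/* module %d */ function f%d(x) { return x + %d; }" % (i, i, i)).encode()
    for i in range(200)
]

bundled = len(zlib.compress(b"".join(chunks)))
unbundled = sum(len(zlib.compress(c)) for c in chunks)
print(bundled, unbundled)
```

On inputs like these, the bundled total comes out markedly smaller, which is the trade-off to weigh before fully unbundling assets.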

Using HTTP/2 in browsers also requires assets to be delivered over TLS. Setting up TLS certificates can be cumbersome for the uninitiated. Fortunately, open source projects such as Let’s Encrypt are working on making certificate registration more accessible.

A work in progress

Most users don’t care what application protocol your site uses—they just want it to be fast and work as expected. Although HTTP/2 has been officially ratified for almost a year, developers are still learning best practices for building faster websites on top of it. The benefits of switching to HTTP/2 depend largely on the makeup of the particular website and what percentage of its users have modern browsers. Moreover, debugging the new protocol is challenging, and easy-to-use developer tools are still under construction.

Despite these challenges, HTTP/2 adoption is growing. According to researchers scanning popular Web properties, the number of top sites that use HTTP/2 is increasing, especially after CloudFlare and WordPress announced their support in late 2015. When considering a switch, it’s important to carefully measure and monitor asset- and page-load time in a variety of environments. As vendors and Web professionals educate themselves on the implications of this massive change, making decisions from real user data is critical. In the midst of a website obesity crisis, now is a great time to cut down on the total number of assets regardless of the protocol.

Friday, 27 January 2017 05:15

Icons As Part Of A Great User Experience


Icons are an essential part of many user interfaces, visually expressing objects, actions and ideas. When done correctly, they communicate the core idea and intent of a product or action, and they bring a lot of nice benefits to user interfaces, such as saving screen real estate and enhancing aesthetic appeal. Last but not least, most apps and websites have icons. It’s a design pattern that is familiar to users.

Despite these advantages, icons can cause usability problems when designers hide functionality behind icons that are hard to recognize. An icon’s first job is to guide users to where they need to go, and in this article we’ll see what it takes to make that possible.

Types Of Icons

As mentioned, an icon is a visual representation of an object, action or idea. If that object, action or idea is not immediately clear to users, the icon will be reduced to visual noise, which will hinder users from completing their task. There are three types of icons: “universal,” “conflicting” and unique icons. Let’s focus on each type and its impact on the user experience.


Universal Icons

A few icons enjoy nearly universal recognition among users. The symbols for home, printing, searching and the shopping cart are such easily recognizable icons (image: Icons8). There is only one problem: universal icons are rare. Beyond the examples cited above, most icons are ambiguous. They can have different meanings depending on the interface.


Conflicting Icons

Trouble comes when you implement a commonly used pictogram that has contradictory meanings. The heart and the star are excellent examples. Not only does the functionality associated with these icons vary from app to app, but these two icons compete with each other.

As a result, these icons are hard to interpret precisely. Even in the context of an individual app, these symbols can be very confusing when the user expects one outcome and gets another. This impedes the user’s understanding of these icons and discourages them from relying on them in future experiences.


Unique Icons

Icons are especially bad for anything abstract because they generally are not strong visual representations. How do you depict a unique object or action? Apple’s icon for its Game Center app, for example, is a group of colorful circles. What does the Game Center icon mean? How does it relate to gaming?

The Game Center icon fails to convey the concept of games. As another example, when Google decided to simplify its Gmail interface and move everything behind an abstract icon, it apparently got a stream of support requests like, “Where is my Google Calendar?” An icon might make complete sense once you know what it’s supposed to represent, but it can take some time for first-time users to figure things out. Another problem is that first-time users tend to avoid interface elements that they don’t understand. It’s human nature to distrust the unknown.

Practical Recommendations For Designing With Icons

Let’s take a look at some simple techniques and strategies for choosing a proper icon for a given context. Obviously, picking an icon is often a long process, and whatever the choice, testing icons in interfaces with real users is crucial.


Icons can save space by reducing text, but at the price of recognition. An icon can represent a thousand different words, and that is exactly the problem. It would be a serious misconception to assume that users either would be familiar with your abstract pictograms or would be willing to spend the extra time discovering what each means.

Users are often intimidated by unfamiliar interfaces. What they really want is a clear idea of what will happen before they perform an action in an unfamiliar app. That’s why your icons need to set clear expectations for users before they click or tap on them.

A good user experience can be measured in many ways, one of which is how much it frees the user from having to think. Clarity is the most important characteristic of a great interface. To avoid the ambiguity that plagues most icons, we can include a text label to clarify an icon’s meaning in a particular context, especially for complex actions and abstract functions.

UserTesting conducted a series of tests, comparing labelled icons to unlabelled icons. It found that:

  • users were able to correctly predict what would happen when they tapped a labelled icon 88% of the time;
  • that number dropped to 60% for unlabelled icons; for unlabelled icons that were unique to the app, users correctly predicted what would happen when they tapped an icon only 34% of the time.


So, not all users are familiar with conventional icons, which makes an icon-only interface potentially harder for them to use. On the other hand, experienced users might regard an interface with text labels everywhere to be cluttered.

How do we make everyone happy? As Michael Zuschlag mentions, icons alone will suffice when at least two of the following three conditions are met:

  • space is very limited (i.e. too small for text alone);
  • the icons are standardized (e.g. they are universal);
  • the icons represent objects with strong physical analogs or visual attributes (e.g. a red rectangle to set the page’s background as red).


Some designers believe labels defeat the purpose of icons and clutter the interface. To avoid using labels, they use tooltips. However, tooltips are a poor substitute for text labels. The fact that text labels never need graphic tooltips is a pretty good clue that text is better than icons. Another major disadvantage is that tooltips fail to translate well to touchscreens. Another common technique is to use tutorials or coach marks or popover hints. However, users might simply rush through the tutorial or forget everything they’ve learned when they next launch your app. Like tooltips, tutorials are no substitute for intuitive design; rather, the opposite. As Matthew at CocoaLove says, “Your app’s tutorial screen is just a list of ways you’ve failed.”


Icons accompanied by labels make information easier to find and scan, as long as they’re placed in the right spot. Place icons according to the natural reading order. As argued by UX Movement, there are two important factors in an icon’s location:

  • In order for icons to serve as a visual scanning aid, users need to see them before they see the accompanying label. Place icons to the left of their labels so that users see them first.
  • Align the icon with the label’s heading, instead of centering it with the heading and body. Seeing the icon first will help users to scan the page more easily.


Iconography lies at the heart of UI design. It can make or break the usability of an interface. Every icon should serve a purpose. It should help the user do what they need to do without requiring additional effort. When designed correctly, icons guide users intuitively through a workflow, without relying much on copy. Don’t make your users think. Make clarity in the app a priority!


Wednesday, 25 January 2017 04:54

Difference between HD and HDX


Main Difference

There are different formats in which a video can be shot, and with developing technology several advancements have taken place to make sure that people can view content in good quality. The main difference between the two terms HD and HDX can be explained by the number of horizontal lines used in each: HD quality uses 480 or 576 horizontal lines, which is the lower quality, while HDX, the newer technology, always uses more than 720.

Basis of Distinction | HD | HDX
Name | High Definition | High Definition Extra
Lines | 480 or 576 horizontal lines | Always greater than 720 horizontal lines
Origin | Started in the 1940s by the American Broadcasting System | Introduced by Vudu in recent years
Visual Quality | Gives the feel of an unreal world | Gives the feeling of reality
Sound Quality | Better | Best
Video Standard | 480p, 720p, 1080p | 1080p
Size Limit | Up to a few gigabytes | Up to terabytes


HDX

HDX is a newer technology that lets people view content in even higher quality than before. The former visual limit was High Definition, but Vudu, an online streaming platform, gave its users the option to watch content in even better quality, which is why HDX was introduced. The service lets its users watch movies, stream live content and watch shows that are on the air as soon as possible, but with the emergence of Netflix it had to come up with something better to keep viewers entertained. It did so by offering a third tier of content that uses technologies even better than the other two, SD and HD. The technique is not that common and is mostly known to Vudu itself, but some details about it are available. It involves four technologies. The first, psycho-visual processing, dampens and removes the artificial look of video, such as flat dark skies and water, and gives it a better appearance. The second, film grain preservation, maintains the slight imperfections that the makers of the movie decided to leave so that people can get the feel of seeing the real thing. The third, statistical variable bitrate, makes sure people see the same quality of video throughout. The last, color gradient processing, makes sure that the colors remain true.


HD

HD is short for High Definition, a term used for videos available in the best quality. It is better than standard definition and has a higher resolution, so people can enjoy the best views possible. There is no single definition of the term, which can cause confusion, but in common usage any video with more than 480 horizontal lines is considered HD in America, while in Europe the threshold is more than 576 lines. No video should fall below 480 lines, or else it belongs in the SD category. The image standard is also better, since frames are captured at higher rates: 60 frames per second in America and 50 frames per second in Europe. In many instances a television show filmed in HD gives the impression of a movie; this process is different, though, and is known as filmizing. The first HD systems were introduced in 1941 with 405 lines and would not qualify as top-quality video in today's world. Several changes took place over the years, and we have now reached a set standard for such content. A few terms help in understanding it further: display resolution, which is the number of lines; the scanning system, which has two types of scanning and describes how the image is drawn; and the number of frames per second, the most common broadcasting limit in the world.

Key Differences

  • HDX stands for High Definition Extra, while HD stands for High Definition.
  • Videos filmed in HD should be at least 480p and can reach up to 720p and 1080p, while videos shot in HDX are only in 1080p.
  • HDX uses TruFilm technology to create its visuals, while HD has no such processes.
  • HD has been used for several decades, starting in the 1940s with the American Broadcasting System, while HDX is a standard introduced by Vudu in recent years.
  • The number of horizontal lines in HD quality is 480 or 576, while the number of horizontal lines in HDX is always greater than 720.
  • HDX videos are much larger in size and can reach up to terabytes, while HD videos are relatively smaller, reaching a few gigabytes.
  • The sound quality in HDX is much better than that of HD.
  • The video definition is much cleaner in HD, since HDX tries to give the user a more real experience, so some imperfections remain.


Monday, 23 January 2017 05:24

13 Free Data Visualization Tools


Data doesn’t have to be boring. Adding a dash of visual appeal to raw data can make it easily comprehensible and instantly appealing.

In the interest of (1) making your data more user-friendly and (2) not boring the eyes out of anyone who sees your work, picking a trusty data visualization tool is a must.

With so many tools out there, choosing the right one that serves your specific needs can be a tedious task. As a first step, read this detailed guide on the factors to consider when choosing your perfect data visualization tool.

Here are some of the popular data visualization tools:

1. D3.js

D3.js — oftentimes simply called D3 — is the most well-known data visualization library today.

D3 gives developers the ability to create even the most complex charts and graphs. It uses open web technologies — HTML, SVG, and CSS — which is great if you care about cross-platform support (because iOS/Android apps, desktop apps, web browsers, and other such platforms can all run these web technologies).

Note that D3 is designed for modern browsers; with anything before IE9, you might run into compatibility issues. Another thing to consider is that working with D3 requires investing some time in learning the D3 API. However, once you learn how to use it, D3 can be an insanely powerful data visualization tool.

D3 is an open source project. Be sure to check out this gallery of D3 examples.

2. FusionCharts

FusionCharts has a collection of over 90 charts and more than 960 maps which can serve the full range of needs of developers and professional data visualization experts. With its support going all the way back to the ancient IE6, browser compatibility is hardly an issue.

FusionCharts is device/platform-agnostic and works easily with both JSON and XML data formats. Here is a sample of their data visualization capabilities. While FusionCharts is slightly heavier on the pocket as compared to some of the other tools in this list, it lets you try all the charts for free before you decide to purchase it.

3. Tableau Public

Tableau Public is capable, easy to use, and free. What more can you want? With a huge arsenal of maps, graphs, and charts, it is a firm favorite for the non-developer audience.

The free version of Tableau attaches a big footer of Tableau branding in the charts you generate; non-commercial customers may be OK with that, but if you aren’t, you can pay to get the cleaner, brand-free versions of the same charts.

Take a look at this visualization of the history of the Dow 30 to get an idea of what Tableau can do for your data visualization projects.

4. Charted

Charted has one of the cleanest user interfaces amongst all the charting tools I’ve seen. It’s extremely easy to use as well. All you have to do is upload a CSV file, or a Google Sheets link, and it’ll generate the chart for you. Moreover, it refreshes your chart every 30 minutes, so your chart’s data source remains fairly up-to-date.

The Charted service is free, and its source code is also freely available if you would like to run it on your own web server.

5. Google Charts

Google Charts is user-friendly and compatible with all browsers and platforms. It covers a wide range of data visualization types — from simple line and bar graphs to complex hierarchical tree maps — making Google charts suitable for almost any project.

Check out the gallery that showcases the various charts and visualizations that Google Charts offers.

6. Flot

Flot is an easy-to-use charting library that provides very elegant charts and graphs. It allows advanced user-interactions like panning, zooming, resizing, switching a data series on and off, and more.

Flot has a wide variety of other user-created plugins available from the community for everything, from new plot types to advanced labels.

7. Chartist.js

If you’re transitioning from Excel and looking for something that doesn’t seem so old-school, you’ve got to give Chartist a look. Created — like all good products — out of frustration with the status quo, it includes a large array of charts that are responsive, animated, and rendered beautifully.

Unlike other bloated apps, Chartist is a small JS library weighing in at 10kb with no dependencies.

8. Highcharts

Highcharts, another big name in the data visualization domain, offers you a wide selection of charts and maps. They offer many plugins that allow you to experience all of its powerful features without needing to deal with JavaScript.

Highcharts is free for non-commercial purposes.

9. Datawrapper

Datawrapper is an extremely easy-to-use data visualization tool for plotting interactive charts. All you need to do is upload your data via a CSV file, choose the chart you want to plot, and that’s basically it: you’re good to go! It’s a very popular tool among journalists, who often use Datawrapper to embed live charts into their news articles.

The fact that it’s a tool of choice for most of the non-techie people out there tells you how easy Datawrapper is to use. Read this tutorial to get started with Datawrapper.

10. dygraphs

dygraphs is a JavaScript charting library that allows for panning, zooming, and mouseover actions. It handles and interprets dense data sets very effectively. dygraphs can support browsers as far back as IE8 without any browser support issues.

11. Raw

Raw bridges the gap between spreadsheets and vector graphics. It’s built on the D3.js platform. If you’re not a programmer, Raw could be the perfect data visualization tool for you.

Raw provides a selection of 16 ready-to-use chart types. Customization is one of the biggest positive aspects of Raw, for it allows you to use your own custom layouts.

12. TimelineJS

TimelineJS is a great tool for creating interactive, visually rich timelines without having to write code. Popular sites like TIME and Radiolab use it frequently to create timelines that display a great deal of information in a small area.

TimelineJS has built-in API support for a variety of data sources like Wikipedia, Twitter, SoundCloud, Vine, Google Maps, and YouTube.

Here’s an example of a timeline developed with TimelineJS.

13. Polymaps

As its name suggests, Polymaps is for creating cartographical data visualizations. It pulls in data from OpenStreetMap, Bing, and other map image providers, while also rendering its own representations. Both its image- and vector-based maps look stunning, as you can see from their wide range of examples.


