Wednesday, 16 November 2016 05:28

4 Points on CRM Integration with Your Website

Websites are no longer just company showcases on the Internet, as more and more people shop and order services online. The percentage of online deals varies across industries, yet in some cases up to 100% of a company's margin comes from online orders (various service providers, etc.). To streamline ecommerce activities, many companies adopt CRM solutions that allow for profound automation. According to Gartner research, nearly 80% of total CRM revenue comes from North America and Western Europe, and around 47% of total CRM software revenue worldwide is generated by SaaS CRM solutions. This is a market where only customer-oriented companies succeed. Will yours be amongst them?

What is CRM Integration?

CRM stands for Customer Relationship Management. It is a type of software that automates the collection and processing of the various data your customers submit on your website. However, CRM is not only about collecting basic customer data, as standard web analytics tools already do this well enough. CRM software helps with the following:

• Managing your existing customer base and adding new prospects from leads

• Connecting your marketing and sales operations into a streamlined process

• Building a centralized marketing, sales, and customer support strategy

• Increasing your ROI (nearly $6 increase per $1 invested).

CRM integration means connecting your CRM software with your website's CMS (content management system, such as WordPress, Drupal or Joomla). Once the integration is complete, you get a robust and powerful tool that helps you manage your customer base better in order to meet customers' needs. Its main benefit is a clear picture of your sales process: you can leverage the incoming data to shape your sales activities and yield better results.

Your website is your company's focal point, gathering data from every interaction with visitors: how they view content, download materials, leave comments, participate in polls, or read your company's tutorials and showcases. You may also display your social media feeds, blogs, and reports on company events so that visitors get a full grasp of the company image. And every one of these interactions can be tracked and analyzed.

By combining CRM software with analytics tools, you can maximize the impact and increase overall sales efficacy. Last but not least, CRM allows you to sort submitted contacts based on the submission page and browsing history, so you can assign the appropriate specialists for follow-up. For instance, you may send helpdesk support invitations to people who browsed your Help section, and let your sales managers communicate with leads that looked through your whitepapers and watched your demos. You can also integrate automatic web form submission to gather primary customer data, and deploy auto-response rules that send certain emails whenever customers take some action (welcome emails, announcements, suggested tutorials or product usage examples, etc.). All of this can have a significant impact on turning one-time visitors into long-term customers.
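As a rough sketch of how such auto-response rules can be wired up (the event names and template names here are hypothetical, not taken from any particular CRM product):

```python
# Hypothetical rule table: a site event maps to the email template to send.
AUTO_RESPONSES = {
    "signed_up": "welcome_email",
    "browsed_help_section": "helpdesk_invitation",
    "downloaded_whitepaper": "sales_follow_up",
}

def auto_respond(event):
    """Return the template an auto-response rule would fire, or None."""
    return AUTO_RESPONSES.get(event)

print(auto_respond("signed_up"))   # welcome_email
```

Real CRMs layer scheduling, segmentation and delivery on top of this, but the core idea is the same lookup from tracked action to prepared response.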

So integrating CRM into your website can give you the following:

• Convenient reports on your prospects and leads

• Combining pre-sales warm-up with after sales follow-up to ensure customer satisfaction

• General reduction of manual sales processing and less reliance on personal contacts (so that a sales representative's sick leave will not stall their ongoing deals)

• Connecting various disparate processes into a smooth workflow to optimize your sales.

How to Integrate Your Website and CRM

There are multiple ways you can integrate the CRM of your choice with your website:

• You can choose a solution like Salesforce, which integrates with many CMSs out of the box but costs quite a sum annually. Your employees will most likely be able to install and configure the software themselves, and you will have the provider's support if need be.

• You can order CRM integration as a service from a specialized company, as many other CRM solutions (open-source ones, for example) may lack some modules (automated CMS integration in particular); these modules should be added and configured by specialists to reduce configuration and data synchronization time.

• Alternatively, you can order a custom CRM solution if you have specific needs. It will take longer than buying an out-of-the-box product, yet it will be personally tailored for your requirements.

The path you take depends on your preferences and company goals.

Types of CRMs

There is quite a large roster of CRM solutions on the market today, so you can choose the one that works best for you. The two most common types are standalone CRM solutions, installed on your own server, and SaaS CRM services.

Standalone CRM.

The data resides on your hosting servers, and you have full control over it. There are several downsides to this option. Not only can standalone software cost a lot, but the product may also receive poor support from the vendor. And you will most likely need to hire specialists to tweak the system and keep it stable.


SaaS CRM.

The other option is adopting a SaaS solution to do the job. All software maintenance is handled by the provider. This kind of service can also seem pricey if you have numerous corporate users operating it, but SaaS solutions generally get more support and upgrades from suppliers. Both options cover the needs of nearly 90% of small and medium businesses thanks to the deep pool of out-of-the-box features available.

Custom CRM.

Opting for a custom-made solution is only necessary for unique corporate workflows and highly specific sets of tasks. For instance, we at DDI Development have experience building complex CRM systems that integrate not only with a website CMS, but also include HR and recruiting modules. Basically, these solutions automate almost all aspects of internal and external corporate activities.


AMAZON HAS BECOME the latest tech giant that's giving away some of its most sophisticated technology. Today the company unveiled DSSTNE (pronounced “destiny”), an open source artificial intelligence framework that the company developed to power its product recommendation system. Now any company, researcher, or curious tinkerer can use it for their own AI applications. It's the latest in a series of projects recently open sourced by large tech companies, all focused on a branch of AI called deep learning. Google, Facebook, and Microsoft have mainly used these systems for tasks like image and speech recognition. But given Amazon's core business, it's not surprising that the online retailer's version is devoted to selling merchandise.

“We are releasing DSSTNE as open source software so that the promise of deep learning can extend beyond speech and language understanding and object recognition to other areas such as search and recommendations,” the Q&A section of Amazon's DSSTNE GitHub page reads. “We hope that researchers around the world can collaborate to improve it. But more importantly, we hope that it spurs innovation in many more areas.” Along with the idealistic rhetoric, open sourcing AI software is a way for tech industry rivals to show off and one-up each other. When Google released its TensorFlow framework last year, it didn't offer support for running the software across multiple servers at the same time. That meant users couldn't speed up their AI computations by stringing together clusters of computers the way Google could by running a more advanced version of the system internally.

That created an opening for other software companies like Microsoft and Yahoo to release their own open source deep learning frameworks that support distributed computing clusters. Google has since caught up, releasing a version of TensorFlow that supports clusters earlier this year. Amazon claims its system takes distribution one step further by enabling users to spread a deep learning problem not just across multiple servers, but across multiple processors within each server. Amazon also says DSSTNE is designed to work with sparser data sets than TensorFlow and other deep learning frameworks. Google uses TensorFlow internally for tasks such as image recognition, where it can rely on the Internet's vast store of, say, cat photos to train its AI to recognize images of cats. Amazon's scenarios are quite different. The company does sell millions of different products, but the number of examples of how the purchase of one product relates to the purchase of another is relatively small compared to cats on the Internet. To make compelling recommendations—that is, to recommend products that customers are more likely to click on and buy—Amazon has a strong incentive to create a system that can make good predictions based on less data. By open sourcing DSSTNE, Amazon is increasing the likelihood that some smart person somewhere outside the company will help it think of ways to make the system better.


Friday, 11 November 2016 05:18

Computer Invention-Electro Smart Pen

This digital pen is a computer invention that transmits writing into digital media.

Although touch screen devices represent a movement away from paper, approximately eighty percent of businesses still use paper-based forms.

Many professionals hand-write their notes, tables, diagrams and drawings instead of using tablets or other devices.

The computer pen is comparable to a regular ink pen (even uses refillable ink) that writes on regular paper, except it has an optical reader that records motion, images and coordinates. The recorded data is then transmitted to a computer via a wireless transmitter.

You can browse and edit your written notes, diagrams, tables, or drawings.

Another useful feature of this computer invention is that hand-written digital files can easily be converted into editable text for use in documents or emails.

Digital pen technology was first developed by the Swedish inventor and entrepreneur Christer Fåhraeus.

Fåhraeus is a physician and holds an honorary doctorate in technology from Lund University in Sweden and an M.Sc. in bioengineering from the University of California, San Diego.

Fåhraeus served as the Chief Executive Officer and Chairman of Anoto Group AB, a company he originally founded in 1996 as C Technologies to license his digital pen technology.

This computer invention has been licensed to companies around the world for various commercial products. Applications include data/signature capture, completing forms, mapping, surveying, document management, paper replay, whiteboards, toys and education.

There are great expectations for digital pen technology over the next few years.

Sources: Anoto; Logipen


W3C's decision to publish a DRM framework will keep the Web relevant and useful.

The World Wide Web Consortium (W3C), the group that orchestrates the development of Web standards, has today published a Working Draft for Encrypted Media Extensions (EME), a framework that will allow the delivery of DRM-protected media through the browser without the use of plugins such as Flash or Silverlight.

EME does not specify any DRM scheme per se. Rather, it defines a set of APIs that allow JavaScript and HTML to interact with decryption/protection modules. These modules will tend to be platform-specific in one way or another and will contain the core DRM technology.

W3C Chief Executive Jeff Jaffe announced W3C's intention yesterday. This was met with a swift response from the Electronic Frontier Foundation (EFF), which tweeted, "Shame on the W3C: today's standards decision paves the way for DRM in the fabric of the open web."

The EFF, along with the Free Software Foundation (FSF) and various other groups, has campaigned against the development of the EME specification. They signed an open letter voicing their opposition and encouraged others to sign a petition against the spec.

The EFF argues that EME runs counter to the philosophy that "the Web needs to be a universal ecosystem that is based on open standards and fully implementable on equal terms by anyone, anywhere, without permission or negotiation." EME undermines the Web's compatibility by allowing sites to demand "specific proprietary third-party software or even special hardware and particular operating systems."

Further, the groups argue that the Web is moving away from proprietary, DRM-capable plugins. The EFF writes that "HTML5 was supposed to be better than Flash, and excluding DRM is exactly what would make it better," and the petition claims that "Flash and Silverlight are finally dying off."

As a practical matter, it's unlikely that the petition could ever be meaningful. Even if W3C decided to drop EME, there are enough important companies working on the spec—including Netflix, Google, and Microsoft—that a common platform will be built. The only difference is whether it happens under the W3C umbrella or merely as a de facto standard assembled by all the interested parties. Keeping it out of W3C might have been a moral victory, but its practical implications would sit between slim and none. It doesn't matter if browsers implement "W3C EME" or "non-W3C EME" if the technology and its capabilities are identical.

These groups are opposed to DRM on principle. The FSF brands systems that support DRM as "defective by design," and insofar as DRM can impede legally protected fair use of media, it has a point. There's a tension between DRM (itself legally protected courtesy of the DMCA) and permissions granted by copyright law.

However, it's not clear that EME does anything to exacerbate that situation. The users of EME—companies like Netflix—are today, right now, already streaming DRM-protected media. It's difficult to imagine that any content distributors that are currently distributing unprotected media are going to start using DRM merely because there's a W3C-approved framework for doing so.

The EME opponents' claim that Flash and Silverlight are dying off has an element of technical truth, but it's also disingenuous.

The technical truth? Silverlight has apparently ceased all development. Flash is still actively developed, with Adobe outlining a ten-year plan for its future development, but the company is also investing heavily in HTML5 tooling and is actively working to ensure that developers have the software to use HTML5 in situations that previously would have used Flash.

It's also true that Adobe has discontinued Flash on smartphones. As a result, there's a thriving market of Internet devices that can't use Flash or Silverlight at all. These currently represent only a minority of Internet-connected devices—about 89 percent of browsing is still done on PCs, and an overwhelming majority of them do have Flash installed—but it's a minority that's growing.

But the claim is disingenuous when used as an argument against DRM. Deprived of the ability to use browser plugins, protected content distributors are not, in general, switching to unprotected media. Instead, they're switching away from the Web entirely. Want to send DRM-protected video to an iPhone? "There's an app for that." Native applications on iOS, Android, Windows Phone, and Windows 8 can all implement DRM, with some platforms, such as Android and Windows 8, even offering various APIs and features to assist this.

In other words, the alternative to using DRM in browser plugins on the Web is not "abandoning DRM"; it's "abandoning the Web."

It's hard to see how this is in the Web's best interest. Mozilla, in particular, is fighting this very outcome. The underlying justification for its development of the Firefox OS smartphone platform is that it wants to ensure that the Web itself is the application platform and that software and services aren't locked away in a series of proprietary, platform-specific apps.

And yet it's precisely this outcome that opposition to EME will produce.

Moreover, a case could be made that EME will make it easier for content distributors to experiment with—and perhaps eventually switch to—DRM-free distribution.

Under the current model, whether it be DRM-capable browser plugins or DRM-capable apps, a content distributor such as Netflix has no reason to experiment with unprotected content. Users of the site's services are already using a DRM-capable platform, and they're unlikely to even notice if one or two videos (for example, one of the Netflix-produced broadcasts like House of Cards or the forthcoming Arrested Development episodes) are unprotected. It wouldn't make a difference to them.

That wouldn't be the case if Netflix used an HTML5 distribution platform built on top of EME. Some users won't have access to EME, either because their browsers don't support the specification at all, or because their platform doesn't have a suitable DRM module available, or because the DRM modules were explicitly disabled. However, every other aspect of the Netflix Web application could work in these browsers.

This kind of Netflix Web app would give Netflix a suitable testing ground for experimenting with unprotected content. This unprotected content would have greater reach and would be accessible to a set of users not normally able to use the protected content. It would provide a testing ground for a company like Netflix to prove that DRM is unnecessary and that by removing DRM, content owners would have greater market access and hence greater potential income. Granted, it might also come with the risk of prolific piracy and unauthorized redistribution, so it might serve only to justify the continued use of DRM.

With plugins and apps, there's no meaningful transition to a DRM-free world. There's no good way for distributors to test the waters and see if unprotected distribution is viable. With EME, there is. EME will keep content out of apps and on the Web, and it creates a stepping stone to a DRM-free world. That's not hurting the open Web—it's working to ensure its continued usefulness and relevance.


Monday, 07 November 2016 04:29

Cell Phones Are the New Paper

Next year, you can drop paper boarding passes and event tickets and just flash your phone at the gate.

Log in to your airline's Web site. Check in. Print out your boarding pass. Hope you don't lose it. Hand the crumpled pass to a TSA security agent and pray you don't get pulled aside for a pat-down search. When you're ready to fly home, wait in line at the airport because you lacked access to a printer in your hotel room. Can't we come up with a better way?

What is it? The idea of the paperless office has been with us since Bill Gates was in short pants, but no matter how sophisticated your OS or your use of digital files in lieu of printouts might be, they're of no help once you leave your desk. People need printouts of maps, receipts, and instructions when a computer just isn't convenient. PDAs failed to fill that need, so coming to the rescue are their replacements: cell phones.

Applications that eliminate the need for a printout in nearly any situation are flooding the market. Cellfire offers mobile coupons you can pull up on your phone and show to a clerk; Tickets.com now makes digital concert passes available via cell phone through its Tickets@Phone service. The final frontier, though, remains the airline boarding pass, which has resisted this next paperless step since the advent of Web-based check-in.

When is it coming? Some cell-phone apps that replace paper are here now (just look at the ones for the iPhone), and even paperless boarding passes are creeping forward. Continental has been experimenting with a cell-phone check-in system that lets you show an encrypted, 2D bar code on your phone to a TSA agent in lieu of a paper boarding pass. The agent scans the bar code with an ordinary scanner, and you're on your way. Introduced at the Houston Intercontinental Airport, the pilot project became permanent earlier this year, and Continental rolled it out in three other airports in 2008. The company promises more airports to come.



Amid all the dire warnings that machines run by artificial intelligence (AI) will one day take over from humans, we need to think more about how we program them in the first place.

The technology may be too far off to seriously entertain these worries – for now – but much of the distrust surrounding AI arises from misunderstandings in what it means to say a machine is “thinking”.

One of the current aims of AI research is to design machines, algorithms, input/output processes or mathematical functions that can mimic human thinking as much as possible.

We want to better understand what goes on in human thinking, especially when it comes to decisions that cannot be justified other than by drawing on our “intuition” and “gut-feelings” – the decisions we can only make after learning from experience.

Consider the human-manager who hires you after first comparing you to the other job applicants in terms of your work history, skills and presentation. This human-manager is able to make a decision identifying the successful candidate.

If we can design a computer program that takes exactly the same inputs as the human-manager and can reproduce its outputs, then we can make inferences about what the human-manager really values, even if he or she cannot articulate the decision on whom to appoint other than to say “it comes down to experience”.
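As an illustrative sketch (not a depiction of any specific research system), even a tiny perceptron trained on a manager's past hire/no-hire decisions can expose which input it implicitly weighted most heavily. The data below is invented:

```python
def train_perceptron(examples, epochs=50, lr=0.1):
    """Learn weights that reproduce a set of past hire/no-hire decisions.
    examples: list of ((experience, other_skills), hired) with hired in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # update only on mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Invented hiring history in which, knowingly or not, experience decided.
data = [((5, 1), 1), ((4, 0), 1), ((1, 3), 0), ((0, 2), 0)]
w, b = train_perceptron(data)
print(w[0] > w[1])   # the learned weights suggest experience mattered most
```

Inspecting the learned weights is the "inference about what the manager really values" the text describes, in miniature.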

This kind of research is being carried out today and applied to understand risk-aversion and risk-seeking behaviour of financial consultants. It’s also being looked at in the field of medical diagnosis.

These human-emulating systems are not yet being asked to make decisions, but they are certainly being used to help guide human decisions and reduce the level of human error and inconsistency.

Fuzzy sets and AI

One promising area of research is to utilise the framework of fuzzy sets. Fuzzy sets and fuzzy logic were formalised by Lotfi Zadeh in 1965 and can be used to mathematically represent our knowledge pertaining to a given subject.

In everyday language what we mean when accusing someone of “fuzzy logic” or “fuzzy thinking” is that their ideas are contradictory, biased or perhaps just not very well thought out.

But in mathematics and logic, “fuzzy” is a name for a research area that has quite a sound and straightforward basis.

The starting point for fuzzy sets is this: many decision processes that can be managed by computers traditionally involve truth values that are binary: something is true or false, and any action is based on the answer (in computing this is typically encoded by 0 or 1).

For example, our human-manager from the earlier example may say to human resources:

• IF the job applicant is aged 25 to 30

• AND has a qualification in philosophy OR literature

• THEN arrange an interview.

This information can all be written into a hiring algorithm, based on true or false answers, because an applicant either is between 25 and 30 or is not, and they either do have the qualification or they do not.
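The binary version of this rule is trivial to encode. A minimal sketch (function and field names are illustrative):

```python
def arrange_interview(age, qualifications):
    """Encodes the manager's rule with strictly binary truth values."""
    # IF aged 25 to 30
    right_age = 25 <= age <= 30
    # AND has a qualification in philosophy OR literature
    right_degree = bool({"philosophy", "literature"} & set(qualifications))
    # THEN arrange an interview
    return right_age and right_degree

print(arrange_interview(27, ["literature"]))   # True
print(arrange_interview(31, ["philosophy"]))   # False: just outside the age band
```

Every condition evaluates to exactly 0 or 1; there is no room for "almost".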

But what if the human-manager is somewhat more vague in expressing their requirements? Instead, the human-manager says:

• IF the applicant is tall

• AND attractive

• THEN the salary offered should be higher.

The problem HR faces in encoding these requests into the hiring algorithm is that they involve a number of subjective concepts. Even though height is something we can objectively measure, how tall should someone be before we call them tall?

Attractiveness is also subjective, even if we only account for the taste of the single human-manager.

Grey areas and fuzzy sets

In fuzzy sets research we say that such characteristics are fuzzy. By this we mean that whether something belongs to a set or not, whether a statement is true or false, can gradually increase from 0 to 1 over a given range of values.

One of the hardest things in any fuzzy-based software application is how best to convert observed inputs (someone’s height) into a fuzzy degree of membership, and then further establish the rules governing the use of connectives such as AND and OR for that fuzzy set.

To this day, and likely for years or decades into the future, the rules for this transition are human-defined. For example, to specify how tall someone is, I could design a function that says a 190cm person is tall (with a truth value of 1) and a 140cm person is not tall (or tall with a truth value of 0).

Then from 140cm, for every increase of 5cm in height the truth value increases by 0.1. So a key feature of any AI system is that we, normal old humans, still govern all the rules concerning how values or words are defined. More importantly, we define all the actions that the AI system can take – the “THEN” statements.
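The "tall" function just described can be written down directly. In the sketch below, min and max stand in for the fuzzy connectives; they are the most common choice, though, as the text notes, not the only one:

```python
def tall_membership(height_cm):
    """'Tall' as a fuzzy set: truth value 0 at 140 cm, rising by 0.1
    for every 5 cm, reaching 1 at 190 cm, and clamped outside that range."""
    return min(1.0, max(0.0, (height_cm - 140) / 50))

# One common interpretation of the fuzzy connectives:
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

print(tall_membership(165))                  # 0.5: halfway between 140 and 190
print(fuzzy_and(tall_membership(165), 0.9))  # 0.5
```

The clamping at 0 and 1 and the straight line between them are exactly the human-defined rules the paragraph refers to; a different designer could justifiably pick different breakpoints.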

Human–robot symbiosis

An area called computing with words takes the idea further by aiming for seamless communication between a human user and an AI computer algorithm.

For the moment, we still need to come up with mathematical representations of subjective terms such as “tall”, “attractive”, “good” and “fast”. Then we need to design a function for combining such comments or commands, followed by another mathematical definition for turning the result we get back into an output like “yes he is tall”.

In conceiving the idea of computing with words, researchers envisage a time where we might have more access to base-level expressions of these terms, such as the brain activity and readings when we use the term “tall”.

This would be an amazing leap, although mainly in terms of the technology required to observe such phenomena (the number of neurons in the brain, let alone synapses between them, is somewhere near the number of galaxies in the universe).

Even so, designing machines and algorithms that can emulate human behaviour to the point of mimicking communication with us is still a long way off.

In the end, any system we design will behave as it is expected to, according to the rules we have designed and the program that governs it.

An irrational fear?

This brings us back to the big fear of AI machines turning on us in the future.

The real danger is not in the birth of genuine artificial intelligence – that we will somehow manage to create a program that can become self-aware, such as HAL 9000 in the movie 2001: A Space Odyssey or Skynet in the Terminator series.

The real danger is that we make errors in encoding our algorithms or that we put machines in situations without properly considering how they will interact with their environment.

These risks, however, are the same that come with any human-made system or object.

So if we were to entrust, say, the decision to fire a weapon to AI algorithms (rather than just the guidance system), then we might have something to fear.

Not a fear that these intelligent weapons will one day turn on us, but rather that we programmed them – given a series of subjective options – to decide the wrong thing and turn on us.

Even if there is some uncertainty about the future of “thinking” machines and what role they will have in our society, a sure thing is that we will be making the final decisions about what they are capable of.

When programming artificial intelligence, the onus is on us (as it is when we design skyscrapers, build machinery, develop pharmaceutical drugs or draft civil laws) to make sure it will do what we really want it to.


Thursday, 27 October 2016 06:05

Finding patterns in corrupted data

New model-fitting technique is efficient even for data sets with hundreds of variables.

Data analysis — and particularly big-data analysis — is often a matter of fitting data to some sort of mathematical model. The most familiar example of this might be linear regression, which finds a line that approximates a distribution of data points. But fitting data to probability distributions, such as the familiar bell curve, is just as common. If, however, a data set has just a few corrupted entries — say, outlandishly improbable measurements — standard data-fitting techniques can break down. This problem becomes much more acute with high-dimensional data, or data with many variables, which is ubiquitous in the digital age.

Since the early 1960s, it’s been known that there are algorithms for weeding corruptions out of high-dimensional data, but none of the algorithms proposed in the past 50 years are practical when the variable count gets above, say, 12.

That’s about to change. Earlier this month, at the IEEE Symposium on Foundations of Computer Science, a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, the University of Southern California, and the University of California at San Diego presented a new set of algorithms that can efficiently fit probability distributions to high-dimensional data.

Remarkably, at the same conference, researchers from Georgia Tech presented a very similar algorithm.

The pioneering work on “robust statistics,” or statistical methods that can tolerate corrupted data, was done by statisticians, but both new papers come from groups of computer scientists. That probably reflects a shift of attention within the field, toward the computational efficiency of model-fitting techniques.

“From the vantage point of theoretical computer science, it’s much more apparent how rare it is for a problem to be efficiently solvable,” says Ankur Moitra, the Rockwell International Career Development Assistant Professor of Mathematics at MIT and one of the leaders of the MIT-USC-UCSD project. “If you start off with some hypothetical thing — ‘Man, I wish I could do this. If I could, it would be robust’ — you’re going to have a bad time, because it will be inefficient. You should start off with the things that you know that you can efficiently do, and figure out how to piece them together to get robustness.”

Resisting corruption

To understand the principle behind robust statistics, Moitra explains, consider the normal distribution — the bell curve, or in mathematical parlance, the one-dimensional Gaussian distribution. The one-dimensional Gaussian is completely described by two parameters: the mean, or average, value of the data, and the variance, which is a measure of how quickly the data spreads out around the mean.

If the data in a data set — say, people’s heights in a given population — is well described by a Gaussian distribution, then the mean is just the arithmetic average. But suppose you have a data set consisting of height measurements of 100 women, and while most of them cluster around 64 inches — some a little higher, some a little lower — one of them, for some reason, is 1,000 inches. Taking the arithmetic average will peg the women's mean height at well over 6 feet, not around 5 feet 4 inches.

One way to avoid such a nonsensical result is to estimate the mean not by taking the numerical average of the data, but by finding its median value. This would involve listing all 100 measurements in order, from smallest to largest, and taking the 50th or 51st. An algorithm that uses the median to estimate the mean is thus more robust, meaning it’s less responsive to corrupted data, than one that uses the average. The median is just an approximation of the mean, however, and the accuracy of the approximation decreases rapidly with more variables. Big-data analysis might require examining thousands or even millions of variables; in such cases, approximating the mean with the median would often yield unusable results.
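The height example is easy to reproduce with an idealized data set in which 99 of the 100 measurements are exactly 64 inches:

```python
import statistics

# Idealized version of the example: 99 women exactly 64 inches tall,
# plus one corrupted 1,000-inch measurement.
heights = [64] * 99 + [1000]

print(statistics.mean(heights))    # 73.36: one bad entry drags the mean up
print(statistics.median(heights))  # 64.0: the median ignores the outlier
```

A single corrupted entry moves the mean by more than nine inches while leaving the median untouched, which is the whole appeal of robust estimators.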

Identifying outliers

One way to weed corrupted data out of a high-dimensional data set is to take 2-D cross sections of the graph of the data and see whether they look like Gaussian distributions. If they don’t, you may have located a cluster of spurious data points, such as that 80-foot-tall woman, which can simply be excised.

The problem is that, with all previously known algorithms that adopted this approach, the number of cross sections required to find corrupted data was an exponential function of the number of dimensions. By contrast, Moitra and his coauthors — Gautam Kamath and Jerry Li, both MIT graduate students in electrical engineering and computer science; Ilias Diakonikolas and Alistair Stewart of USC; and Daniel Kane of UCSD — found an algorithm whose running time increases with the number of data dimensions at a much more reasonable rate (polynomially, in computer science jargon).

Their algorithm relies on two insights. The first is what metric to use when measuring how far away a data set is from a range of distributions with approximately the same shape. That allows them to tell when they’ve winnowed out enough corrupted data to permit a good fit.

The other is how to identify the regions of data in which to begin taking cross sections. For that, the researchers rely on something called the kurtosis of a distribution, which measures the size of its tails, or the rate at which the concentration of data decreases far from the mean. Again, there are multiple ways to infer kurtosis from data samples, and selecting the right one is central to the algorithm’s efficiency. The researchers’ approach works with Gaussian distributions, certain combinations of Gaussian distributions, another common distribution called the product distribution, and certain combinations of product distributions. Although they believe that their approach can be extended to other types of distributions, in ongoing work, their chief focus is on applying their techniques to real-world data.
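To make the kurtosis idea concrete, here is a hedged, one-dimensional Python sketch (the researchers’ actual algorithm works in high dimensions and is far more involved): planting a small fraction of outliers inflates the fourth standardized moment well beyond the value of 3 that characterizes a Gaussian.

```python
import random

def sample_kurtosis(xs):
    """Fourth standardized moment; equals 3.0 for an ideal Gaussian."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return sum((x - mean) ** 4 for x in xs) / (n * var ** 2)

random.seed(0)
clean = [random.gauss(0.0, 1.0) for _ in range(10_000)]
corrupted = clean[:9_900] + [30.0] * 100  # 1% of samples replaced by outliers

print(sample_kurtosis(clean))      # close to 3: tails look Gaussian
print(sample_kurtosis(corrupted))  # far above 3: heavy tail flags corruption
```

Even 1 percent corruption is enough to push the statistic far from its Gaussian value, which is why heavy tails are a useful signal for where to start looking.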


Artificial Intelligence Is the Most Important Technology of the Future

Artificial Intelligence is a set of tools that is driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques that conduct pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today and talk about potential future applications. Any brief overview of AI will necessarily be incomplete, but I’ll describe a few of the most exciting items.

The key applications of Artificial Intelligence are in areas that involve more data than humans can handle on their own, but decisions simple enough that an AI can get somewhere with them: big data, and lots of little rote operations that add up to something useful. An example is image recognition; by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take an image of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).

It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision to automate work towards building connectomes, which detail the connections between all neurons. These are a key step towards building computers that simulate the human brain.

As an example of how difficult it is to build a connectome without AI, consider the case of the roundworm C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively map the worm’s nervous system in the 1970s and 80s, it took more than a decade of work to piece this data into a full map of the worm’s brain. That is despite that brain containing just 7,000 connections between about 300 neurons. By comparison, the human brain contains 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it would be hopeless.
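The scale gap is easy to quantify from the article’s own figures; a back-of-the-envelope Python calculation:

```python
# Back-of-the-envelope comparison using the figures above
worm_connections = 7_000
human_connections = 100 * 10**12  # 100 trillion

ratio = human_connections / worm_connections
print(f"The human connectome is ~{ratio:.1e} times larger")  # ~1.4e+10
```

A map that took a decade for 7,000 connections does not scale, by hand, to a network some fourteen billion times larger.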

There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis of the hippocampus, the part of the brain responsible for forming memories, was demonstrated recently by Theodore Berger at the University of Southern California: a rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.

The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.

Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. To be useful to a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than is enabled by any current technology. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.

Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, having to do with the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypothesis and tested it. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities.

This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound. Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common-sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.

That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting and relevant to futurist goals.


We have seen great leaps in digital technology in the past five years. Smartphones, cloud computing, multi-touch tablets: these are all innovations that have revolutionized the way we live and work. Believe it or not, though, we are just getting started. Technology will get even better. In the future, we could live the way people in science fiction movies do.

Today’s post is about 10 upcoming, real-life products that are set to revolutionize the world as we know it. Get ready to control the desktop and slice Ninja fruits with your eyes. Get ready to print your own creative physical products. Get ready to dive into the virtual world and interact with it. Come unfold the future with us.

1. Google Glass

Augmented Reality has already entered our lives in the form of simulation experiments and education apps, but Google is taking it several steps further with Google Glass. Theoretically, with Google Glass you are able to view social media feeds, text messages, and Google Maps, as well as navigate with GPS and take photos. You will also get the latest updates while you are on the go.

• (Image Source: YouTube)

It’s truly what we call vision, and it’s absolutely possible given that Google co-founder Sergey Brin has demoed the glasses with skydivers and creatives. Currently the device is only available to some developers, with a price tag of $1,500, but expect other tech companies to try it out and build an affordable consumer version.

2. Form 1

Just as the term suggests, 3D printing is the technology that could forge your digital design into a solid real-life product. It’s nothing new for the advanced mechanical industry, but a personal 3D printer is definitely a revolutionary idea.

Everybody can create their own physical products based on their custom designs, with no approval needed from any giant manufacturer! Even the Aston Martin that was crashed in the recent James Bond movie was a 3D-printed model!

• (Image Source: Kickstarter)

Form 1 is one such personal 3D printer, and it can be yours for just $2,799. That may sound like a high price, but for the luxury of producing your own prototypes, it’s a reasonable one.

Imagine a future where every individual professional has the capability to mass produce their own creative physical products without limitation. This is the future where personal productivity and creativity are maximized.

3. Oculus Rift

Virtual Reality gaming is here in the form of Oculus Rift. This history-defining 3D headset makes you feel as though you are actually inside a video game. In the Rift’s virtual world, you can turn your head with ultra-low latency and view the world on a high-resolution display.

There are premium products in the market that can do the same, but Rift wants you to enjoy the experience for only $300, and the package even comes as a development kit. This is the beginning of the revolution in next-generation gaming.

• (Image Source: Kickstarter)

The timing is perfect, as the world is currently abuzz with virtual reality, thanks in part to Sword Art Online, the anime series featuring characters playing games in an entirely virtual world. It could take a few more years to reach that level of realism, but Oculus Rift is our first step.

4. Leap Motion

The multi-touch desktop was a (miserably) failed product, because hands get very tired with prolonged use, but Leap Motion wants to challenge this dark area again with a more advanced idea. It lets you control the desktop with your fingers, without touching the screen.

• (Image Source: Leap Motion)

It’s not your typical motion sensor: Leap Motion lets you scroll web pages, zoom in on maps and photos, sign documents, and even play a first-person shooter game with only hand and finger movements. Smooth, responsive tracking is the crucial point here. More importantly, you can own this future for just $70, the price of a premium PS3 game title!

If this device could completely work with Oculus Rift to simulate a real-time gaming experience, gaming is going to get a major make-over.

5. Eye Tribe

Eye tracking has been actively discussed by technology enthusiasts for years, but it’s really challenging to implement. Eye Tribe actually did it. They successfully created technology that lets you control your tablet, play a flight simulator, and even slice fruits in Fruit Ninja with only your eye movements.

• (Image Source: Eye Tribe)

It’s basically taking common eye-tracking technology and combining it with a front-facing camera plus some serious computer-vision algorithms, and voilà: fruit slicing done with the eyes! A live demo was given at LeWeb this year, and we may actually see it in action in mobile devices in 2013.

Currently the company is still seeking partnership to bring this sci-fi tech into the consumer market but you and I know that this product is simply too awesome to fail.

6. SmartThings

The problem with most current devices is that they function in isolation, and it takes real effort for tech competitors to partner with each other and build products that can truly connect. SmartThings is here to make every device you own, digital or non-digital, connect and work together for your benefit.

• (Image Source: Kickstarter)

With SmartThings, you can get your smoke alarms and your humidity, pressure, and vibration sensors to detect changes in your house and alert you through your smartphone! Imagine the possibilities.

You could track who’s been inside your house, turn on the lights while you’re entering a room, shut windows and doors when you leave the house, all with the help of something that only costs $500! Feel like a tech lord in your castle with this marvel.

7. Firefox OS

iOS and Android are great, but they each have their own rules and policies that certainly inhibit the creative efforts of developers. Mozilla has since decided to build a new mobile operating system from scratch, one that will focus on true openness, freedom and user choice. It’s Firefox OS.

Firefox OS is built on the Gonk, Gecko and Gaia software layers – for the rest of us, that means it is built on open source and carries web technologies such as HTML5 and CSS3.

• (Image Source: Mozilla)

Developers can create and debut web apps without the blockade of requirements set by app stores, and users can even customize the OS to their needs. The OS has made its debut on Android-compatible phones, and the impression so far is great.

You can use the OS for the essential tasks you do on iOS or Android: calling friends, browsing the web, taking photos, and playing games are all possible on Firefox OS, which is set to rock the smartphone market.

8. Project Fiona

Meet the first generation of the gaming tablet. Razer’s Project Fiona is a serious tablet built for hardcore gaming. Once it’s out, it will mark a new frontier for tablets; other tech companies may want to build their own gaming-dedicated tablets, but for now Fiona is the only one slated to debut in 2013.

• (Image Source: Razer™)

This beast features a next-generation Intel® Core i7 processor geared to render all your favorite PC games in the palm of your hands. Crowned as the best gaming-accessories manufacturer, Razer clearly knows how to build user experience straight into the tablet: a 3-axis gyro, magnetometer, accelerometer, and a full-screen multi-touch user interface. My body and soul are ready.

9. Parallella

Parallella is going to change the way computers are made, and Adapteva offers you the chance to join in on this revolution. Simply put, it’s a supercomputer for everyone: an energy-efficient computer built for processing complex software simultaneously and effectively. Real-time object tracking, holographic heads-up displays, and speech recognition will become even stronger and smarter with Parallella.

• (Image Source: YouTube)

The project has been successfully funded, with an estimated delivery date of February 2013. For a mini supercomputer, the price is remarkably promising: just $99! It’s not recommended for non-programmers or non-Linux users, but the kit is loaded with development software for creating your own projects.

I never thought the future of computing could be kick-started with just $99, made possible by crowdfunding platforms.

10. Google Driverless Car

I can still remember watching I, Robot as a teen and being skeptical about my brother’s claim that one day the driverless car would become a reality. It’s now a reality, made possible by… a search engine company, Google.

While the data source is still a secret recipe, the Google driverless car is powered by artificial intelligence that uses input from video cameras inside the car, a sensor on the vehicle’s roof, and radar and position sensors attached to different parts of the car. It sounds like a lot of effort to mimic human intelligence in a car, but so far the system has successfully driven 1,609 kilometres (1,000 miles) without human intervention!


• (Image Source: Wikipedia)

“You can count on one hand the number of years it will take before ordinary people can experience this,” Google co-founder Sergey Brin has said. Innovation is the achievement, however; consumerization is the headache, as Google now faces the challenge of forging the system into an affordable gem that every worker on an average salary can benefit from.


Project Noto, one of Google's most ambitious undertakings ever, has reached a milestone. Noto now supports 800 languages and 100 writing scripts, the companies announced last week.

Google and Monotype launched the open source initiative to create a typeface family that supports all the languages in the world, even rarely used languages.

Both serif and sans serif letters with up to eight weights are supported, as well as numbers, emoji, symbols and musical notation.

"Noto" is short for "no tofu." Outside the font world, tofu is something to eat -- but to insiders, it's those annoying rectangles that appear on the screen when a computer doesn't have a font to display the characters in a document or on a Web page. The name "Noto" conveys Google's goal of eliminating them.

All Noto fonts can be downloaded for free.

Keeping Information Alive

Hundreds of researchers, designers, linguists, cultural experts and project managers around the world have been involved with Noto.

Work on the Noto fonts is ongoing as new scripts are added to the Unicode Standard -- a character coding system designed to support the worldwide interchange, processing and display of written texts representing the diverse languages and technical disciplines of the modern world, according to the Unicode Consortium.

Essentially, the standard aims to assign a unique number, or code point, to every character in every written language.
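The principle is easy to see in a few lines of Python, where every character, whatever its script, maps to exactly one code point:

```python
# One unique code point per character, regardless of script
for ch in "Aα文":
    print(f"{ch!r} -> U+{ord(ch):04X}")

# 'A' -> U+0041
# 'α' -> U+03B1
# '文' -> U+6587
```

When a font lacks a glyph for one of those code points, the renderer falls back to the rectangular “tofu” box; Noto’s goal is to supply a glyph for all of them.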

"The aim of the Noto project is to provide digital representation to all the scripts in the Unicode Standard," said Kamal Mansour, a linguistic typographer at Monotype. "That in particular is something that many different language communities could not afford to do on their own."

Stones to Sand

As happens with Google projects from time to time, Noto began as an internal project.

"Our goal for Noto has been to create fonts for our devices, but we're also very interested in keeping information alive," said Bob Jung, director of internationalization at Google.

Google believes it's really important to preserve even dead languages, he added, and "without the digital capability of Noto, it's much more difficult to preserve that cultural resource."

When adding languages to Noto, priority is given to widely used languages, but it's important to support other languages, too, even if no one is still speaking them, said Google Product Manager Xiangye Xiao.

"There are some characters you can only see on stones," she explained. "If you don't move them to the Web, over time those stones will become sand, and we'll never be able to recover those drawings or that writing."

Suit in the Closet

Tofu isn't just an obstacle to language preservation -- it's an obstacle to business, as well.

"Because there isn't a common font that works across all languages and use cases, you run into the problem of a default font in one culture not being acceptable for business use in another culture," said Paul Teich, a principal analyst with Tirias Research.

"What Google has done is design a clean, modern, least-common-denominator business and education font across cultures and languages. It's the suit that always stays in your closet," he told TechNewsWorld.

"It lets Google take its browser Chrome to any device and serve up a font that exists everywhere because they created it," added Teich. "It can become a lingua franca, simply because it has a character for every language."

Aesthetically, the font is fairly basic, observed John Caserta, an associate professor at the Rhode Island School of Design.

"A lot of free fonts lack character and risk," he told TechNewsWorld.

"Graphic designers are going to want to find a typeface that's more unique to create a brand for a company," Caserta said. "For websites that don't need a distinct look, these Noto fonts will work fairly well."

