The history of machine learning goes back to the 1950s, when researchers first thought about creating artificial intelligence that could learn from data. But only recently has machine learning seen truly rapid development and growth. As we have entered the age of web-based technologies and moved away from the desktop, producing, consuming and managing huge amounts of information every day has become the norm.
Enabling instant data exchange and analysis has become a priority, and the need for automated data processing has brought the advantages of machine learning to the fore.
Machine learning is a method of data analysis that automates the building of analytical models. In essence, algorithms iteratively learn from data, using specific inputs to build models that make predictions rather than follow strictly programmed instructions. When the model encounters new data, it adjusts based on previously received information, making it possible to produce new, reliable results.
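The idea of a model that iteratively adjusts to data can be sketched in a few lines. The snippet below is purely illustrative (no real library is assumed): a one-variable linear model fit by stochastic gradient descent, where each pass over the data nudges the parameters to reduce prediction error.

```python
def train(samples, epochs=200, lr=0.01):
    """Fit y ~ w*x + b by repeatedly adjusting w and b from the data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = (w * x + b) - y   # prediction error on this sample
            w -= lr * error * x       # nudge parameters to reduce the error
            b -= lr * error
    return w, b

# Data generated from y = 2x + 1; training recovers roughly those values,
# without the rule ever being programmed explicitly.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
```

The same loop keeps working if more samples arrive later: the model simply continues adjusting from where it left off, which is the behavior the paragraph above describes.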
Since its inception, machine learning has changed significantly, shifting away from the broader concept of artificial intelligence and concentrating on probability-theory models and methods that solve practical tasks. Today, machine learning is mostly focused on performing big data calculations iteratively, quickly and without human involvement. Contributing to this are the constantly growing amounts and variety of data, affordable data storage, more powerful computational processing, and the need for complex analysis.
In real life, it's not always apparent how machine learning applications work or what exactly they do, but their results are part of everyday life. Examples of machine learning in action include web search results, email spam filters, fraud detection algorithms and even self-driving vehicles, such as Google cars.
Below you can see a forecast analysis by BCC Research showing how the smart machine market is growing, at nearly 20 percent every year.
Even with the constant progress in machine learning, it takes highly specific skills to create tools that can support complicated and hard-to-predict flows of data. A new programming paradigm called probabilistic programming was created to serve the growing demand for processing complex and numerous data sets, to manage uncertain information, and to make machine learning more powerful and effective.
Probabilistic programming was specifically designed for developing applications that require less data for processing but still provide accurate results. In essence, probabilistic programming languages perform tasks such as probability computation and parameter and program learning. These languages are applied in robotics, artificial intelligence, bioinformatics, and probabilistic planning.
A probabilistic programming language combines knowledge of the situation with the laws of probability. It is typically a high-level language designed to help the developer define probability models, and it makes it possible to resolve these models automatically. At the same time, a probabilistic programming language decreases the time and effort needed to create similar new models and analyze data.
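To make the "define a model, resolve it automatically" idea concrete, here is a hand-rolled sketch of the underlying mechanism: Bayes' rule applied by exhaustive enumeration. Real probabilistic programming systems (Figaro, Stan and others) automate this at far greater scale; everything below, including the coin example, is illustrative only.

```python
def posterior(hypotheses, prior, likelihood, data):
    """Bayes' rule by enumeration: P(h|data) is proportional to P(h) * P(data|h)."""
    unnorm = {h: prior[h] * likelihood(h, data) for h in hypotheses}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Model: a coin whose heads-probability is one of three hypotheses,
# all equally likely before we see any data.
hyps = [0.3, 0.5, 0.8]
prior = {h: 1 / 3 for h in hyps}

def likelihood(h, flips):
    """Probability of the observed flip sequence if heads-probability is h."""
    p = 1.0
    for f in flips:
        p *= h if f == "H" else (1 - h)
    return p

# Observing 8 heads in 10 flips shifts belief toward the 0.8-biased coin.
post = posterior(hyps, prior, likelihood, ["H"] * 8 + ["T"] * 2)
```

The developer only declares the model (hypotheses, prior, likelihood); the generic `posterior` routine does the resolving. That separation of model from inference machinery is exactly what probabilistic programming languages provide.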
The key tasks that probabilistic programming solves in machine learning are reducing model code to make models easier to understand, speeding up development, enabling more complex machine learning models, and, finally, lowering the level of expertise developers need to create machine learning applications.
An example of a language used for probabilistic programming is Scala, through libraries such as Figaro. This succinct programming language runs on the JVM and combines object-oriented and functional features, enabling productivity gains, compatibility with Java and support for multicore CPUs.
Scala supports advanced component architectures (such as classes and traits) and is most often used to develop server systems. It significantly reduces the size of code compared to Java while retaining all of Java's performance advantages.
Some experts say that Scala has the potential to become Java's successor. By 2012, Scala had become one of the most popular JVM languages. Well-known companies that switched their backend and API development to Scala include Twitter, LinkedIn, Meetup, Coursera and the Guardian.
The introduction of probabilistic programming gave a boost to robotics, whose needs have already outgrown the inference techniques and representational capacities of probabilistic graphical models. Probabilistic programming currently focuses on enabling natural language technology, and once it does, the era of social robots will become a reality.
The newest entry into this class is JIBO, currently one of the most ambitious game-changing start-up projects, delivering the first social companion robot. The project was launched in Boston in 2013 and has raised more than $25 million from its supporters to date.
The basic concept behind JIBO is a consumer robot that can socialize, express emotions, and serve as a personal assistant that meets the needs of the whole family. The robot has two high-resolution cameras that can see and differentiate people's faces, and it can respond naturally when talked to. JIBO is based on artificial intelligence algorithms that enable it to learn family members' personal preferences. Because JIBO is an open platform, the robot's skills will improve dramatically over time. JIBO is not the only technological experiment in robotics, but this example shows how communication and relationships between people and robots are being redefined, and how robots are getting closer to becoming an inseparable and affordable part of daily life.