In 2018, AI has begun to define the human experience in the developed world.
Automated entities that learn and interact in ways that mimic human behavior have grown to saturate the consumer mobile services that are replacing so many legacy, bricks-and-mortar functions. Google’s Assistant and Apple’s Siri, dueling voice-activated bots, have grown from vaguely creepy novelties into streamlined, painless ways to augment your daily life.
Google and Apple proudly placed AI at the core of their respective enterprises in 2017, and it’s a turning point worth noting. Freshly minted flagship devices, Pixel 2 and iPhone X, come loaded with entire bundles of AI-driven, device-specific services as the firms explore new approaches to integrating their activities far deeper into their users’ lives than ever before.
“We are clearly at an inflection point with vision.”
These words were spoken by Google CEO Sundar Pichai at the company’s 2017 I/O developer conference, announcing its newest product, Lens.
Lens may illustrate this shift more clearly than the latest tech-loaded handsets, which may take off or founder, and will be replaced within the year. Lens will be around for a long time. It leverages computer vision and augmented reality techniques, using a device’s camera as its primary interface (among other sensors), allowing it to take Google Assistant’s services to a new dimension.
It will recognise objects that it ‘sees’ and try to help the user understand them. Recognising and identifying objects such as plants and flowers, providing contextual information for addresses or businesses, or logging into a WiFi network by pointing the camera at a printed passphrase are a few of the use cases described at the product launch.
Google wants us to interact with our increasingly-aware surroundings in ways that are not dependent on specific devices, and Lens is the gateway.
Applications of artificial intelligence are myriad within the tech industry. Facebook’s revenue stream critically relies on automated classification and clustering of precisely sliced user data to produce ever more finely targeted marketing services. Google applied machine learning algorithms in designing Google Home, its AI appliance designed to be spoken to and interacted with, reducing the number of physical microphones from eight to two – a problem its human engineers had been unable to solve.
But the great strides in learning algorithms that have propelled this technology will have a deep and lasting impact on many other industries, too.
A report from economic strategy firm AlphaBeta in September found that Australia needs to double its investment in AI and automation to collect on a $2.2 trillion opportunity by 2030, and should be preparing to ‘cushion’ an estimated 3 million workers to be displaced by the rapidly shifting nature of work.
The key discipline that has fueled this explosion of AI-based services and applications is machine learning, and it’s a discipline that isn’t new.
The first conceptual milestone was laid in 1950 with Alan Turing’s “Turing Test”, and it seems resonant with contemporary jitters over AI’s increasing presence in our lives. Turing’s proposal was a theoretical test to determine whether a computer has real intelligence – it must trick a human into believing it, too, is human.
The first technical milestones followed in that decade: the development of the first learning algorithm, with Arthur Samuel’s checkers-playing computer that improved as it played, and the first computer neural network, simulating the thought processes of the human brain – wonderfully named ‘The Perceptron’.
Tools that learn
These two developments paved the way to key growth areas that are now at the core of an entirely new set of paradigms that are already having a deep impact on the geospatial industry – and have the potential to disrupt far beyond.
Photogrammetry firm Pix4D have seized upon these advancements to offer striking new capabilities in analysis and classification from UAV-derived photographs. Three-dimensional point clouds, until fairly recently the sole province of LIDAR-sourced data, can be created from aerial images, with automatic classification to discern terrain, roads, buildings, vegetation and human-made objects such as lamps or cars.
These capabilities lower the price of admission for practitioners offering these services – expensive LIDAR scanners can be replaced by UAVs, and automatic classifiers cut out steps in the workflow of producing a model.
“The technology today works optimally for urban or generic landscape datasets at typical UAV resolution – approximately 5cm GSD – but in principle it can be trained on and work over any type of scene, at any resolution,” Pix4D sales and marketing director Lorenzo Martelletti said in a statement to Position.
“For instance, Pix4D could create a classifier specialized in discriminating stockpiles of different materials in an open pit, or electrical pylons, cables and vegetation along a powerline.”
The Pix4D team developed their learning classification system to recognise these objects through a process known as supervised learning. This process is one of three key forms of machine learning that are instrumental in driving current development of services that have this principle at their core.
Dr. Sebastien Wong is director of machine learning at Consilium, an Adelaide-based firm specialising in modelling, simulation and machine intelligence. Consilium works across sectors, but they cut their teeth building machine intelligence systems for the Defence Science and Technology Group (DST Group), the defence department’s R&D arm. DST Group are Consilium’s foundation client, for whom they developed the analytical and programmatic components that their subsequent work leverages and builds on. Dr. Wong breaks machine learning down very simply.
“Machine learning is data driven modelling,” he said. “Supervised learning is building models when you know exactly what the outcome should be, like building a classifier. You’ve got labels, so you know that this picture that you’re looking at – that this (part of a) satellite image, is showing you a car, or a road, or a tree.”
“Unsupervised is typically used when you want to group like things together – clustering – or when you are looking for a pattern that is different – you can use when you don’t have any labels. There’s a nice synergy between supervised and unsupervised machine learning – you can use unsupervised learning to learn what the important features are.”
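Dr. Wong’s distinction can be sketched in a few lines of code: a supervised model is fit from labelled samples, while an unsupervised method groups the same points with no labels at all. The data, class names and the nearest-centroid/k-means pairing below are purely illustrative, not Consilium’s actual methods.

```python
# Toy contrast between supervised and unsupervised learning on 2-D "features".

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# --- Supervised: labels are known ("this pixel is road, that one is tree") ---
def fit_nearest_centroid(samples, labels):
    return {lab: centroid([s for s, l in zip(samples, labels) if l == lab])
            for lab in set(labels)}

def classify(model, point):
    return min(model, key=lambda lab: dist2(model[lab], point))

# --- Unsupervised: no labels, just group similar points (naive k-means) ---
def kmeans(points, k, iters=10):
    centres = points[:k]                      # naive initialisation for the sketch
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist2(centres[i], p))].append(p)
        centres = [centroid(g) if g else centres[i] for i, g in enumerate(groups)]
    return centres

# Two tight blobs of toy "image features".
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = ["road", "road", "road", "tree", "tree", "tree"]

model = fit_nearest_centroid(data, labels)
print(classify(model, (0.1, 0.1)))        # labels a new point using known classes
print(sorted(kmeans(data, 2)))            # finds two centres, one per blob, unaided
```

The unsupervised pass recovers the same two groups without ever seeing a label — the “nice synergy” Wong describes, since those discovered clusters can then inform which features matter for a supervised classifier.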
Consilium have recently announced a partnership with satellite imagery giant DigitalGlobe, giving Consilium access to their Geospatial Big Data platform, GBDX, a cloud-based service with access to 100 petabytes of imagery. Dr. Wong is excited by the power of the services they are now able to offer to industry.
“WorldView-3 takes you up to 30 centimetre resolution. All of a sudden there’s applications that you can do commercially that previously were only available to defence, like finding an individual car – very detailed structures,” he said.
“So now instead of having one pixel with multiple objects in it, you have multiple pixels describing one whole object. With this high resolution now we can do two things – not just finding the properties of the material in there but actually what the object is, and you do pattern recognition, and machine learning type applications of that.”
A confluence of three major developments is key in driving the evolution of Consilium’s spatial analytics services, according to Dr. Wong: current machine learning techniques, allowing new degrees of automation; cloud computing, removing the capital requirements for serious computational horsepower; and commercial APIs for satellite imagery, such as Google’s Earth Engine and DigitalGlobe’s GBDX.
Dr. Wong says the result for consumers is a drastic increase in access. The hardware, the high resolution data and the skilled resources required for sophisticated spatial analysis – such as asset inspections at mine sites, calculating distances and trajectories of trucks, and performing measurements of structures – are more easily attainable.
“Say you are a retiree and you’ve got your own property and you want to do the same thing as a large commercial farmer — you just want to know how many kilometres of track you have, or how large an area you need to work on. Suddenly this information becomes democratised,” he said.
“When you automate things through machine learning, that same capability suddenly becomes able to be purchased by a much smaller entity at a smaller cost,” he said.
Machines do the work
Learning algorithms are now being applied to remotely sensed data to help create richer data environments than were previously possible, which in turn enable new forms of modelling and experimentation.
PSMA Australia’s Geoscape is a colossal dataset of the Australian built environment, a rich and multi-layered data environment. Highly accurate address data, administrative boundary and cadastral layers, demographic data and zoning are just some of the dimensions of this set, combined with 2-metre pixels for all centres with a population over 200, and highly detailed surface cover and tree layers.
In creating the product, algorithms apply mass-scale feature extraction to high resolution imagery, also sourced from DigitalGlobe, to create vector building outlines from which 3D models are built.
Observed building outlines are initially drawn by hand, but then are fed into self-learning neural networks, which can then analyse clusters of pixels in other images to identify and extract the buildings.
A two-metre-square grid is applied to imagery of urban areas and remote communities and a 30-metre-square grid for the remainder of the Australian continent. Machine learning is used to interpret what is reflected in each cell of the grids, based on known patterns of electromagnetic radiation reflectance, and assign an appropriate value to each cell, thereby representing bare earth, roads, grass, water and buildings.
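The per-cell step can be sketched as a nearest-signature match: each grid cell carries mean reflectance in a few spectral bands and receives the class whose known signature it most resembles. The band values and signatures below are invented for illustration — Geoscape’s production classifiers are learned models, not a fixed lookup like this.

```python
# Illustrative per-cell labelling: assign each grid cell the land cover class
# whose spectral signature is closest. All numbers are made up for the sketch.

SIGNATURES = {              # (red, near-infrared, shortwave-IR) reflectance
    "water":      (0.03, 0.02, 0.01),   # water absorbs strongly in NIR/SWIR
    "grass":      (0.08, 0.45, 0.25),   # healthy vegetation is bright in NIR
    "bare earth": (0.25, 0.30, 0.35),
    "building":   (0.30, 0.28, 0.30),
}

def label_cell(reflectance):
    """Return the class with the closest signature (squared Euclidean)."""
    def d2(sig):
        return sum((a - b) ** 2 for a, b in zip(sig, reflectance))
    return min(SIGNATURES, key=lambda c: d2(SIGNATURES[c]))

# A tiny 2x2 "grid" of observed cell reflectances.
grid = [
    [(0.02, 0.03, 0.01), (0.07, 0.50, 0.22)],
    [(0.31, 0.27, 0.31), (0.24, 0.29, 0.36)],
]
labelled = [[label_cell(cell) for cell in row] for row in grid]
print(labelled)
```

A learned classifier replaces the hand-written `SIGNATURES` table with boundaries fitted from labelled training cells, but the shape of the problem — one class decision per grid cell — is the same.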
As impressive and versatile as Geoscape is, some players are taking this multifaceted environment and using it merely as a basal step, a foundation to build upon.
Sensing Value is a dynamic intelligence firm that offers unique, cutting-edge products to both private and public sectors. Among a suite of multidisciplinary modelling and analytics services, they take the rich base environment of Geoscape and offer an almost untapped set of possibilities, in part enabled by the depth and currency it provides.
“What we are essentially working towards is a full virtualisation of built form and natural environment,” says David McCloskey, founding director and co-owner of Sensing Value.
McCloskey’s experience with machine learning began with the core concepts of the 1984 monograph ‘Classification & Regression Trees’, a foundational text in the field of modern machine learning. He attended courses by Dr. Dan Steinberg, president of Salford Systems, a firm established to commercialise the intellectual property developed by Leo Breiman, Charles Stone, Jerome Friedman and Richard Olshen in that monograph.
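The core CART idea from that monograph is simple to sketch: recursively split the data at the threshold that leaves the purest child nodes, with purity measured by Gini impurity. The one-feature example below (roof pitch as a stand-in variable) is a toy illustration of a single split, not Salford Systems’ implementation.

```python
# A CART-style best-split search on one numeric feature, using Gini impurity.

def gini(labels):
    """Gini impurity: 0 for a pure node, higher for mixed nodes."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Return the threshold on xs that minimises weighted child impurity."""
    best_score, best_t = float("inf"), None
    for t in sorted(set(xs))[1:]:                   # candidate cut points
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_score, best_t = score, t
    return best_t

# Toy data with a clean boundary: pitches below 25 degrees are "flat" roofs.
pitch = [5, 8, 12, 15, 25, 30, 35, 40]
label = ["flat", "flat", "flat", "flat", "gabled", "gabled", "gabled", "gabled"]
print(best_split(pitch, label))   # → 25
```

A full CART tree applies this search recursively to each child node, over every feature, and then prunes — but every split in the tree is chosen exactly this way.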
McCloskey describes his relationship with these figures as ‘sitting at the feet of the giants’, and sees Sensing Value’s work as celebrating the brilliance of their work, and attempting to extend it into new areas. An upcoming project in the wetlands of Western Australia aims to model a unique set of measurements from the combination of Geoscape’s data, remote sensing and machine learning techniques.
The team begins with the 3D model of the current built form and tree cover around the site from Geoscape, and physically installs sensors into the stormwater drains on site, allowing them to measure the volume and velocity of specific rainfall events.
“Because the 3D models actually give us the hard surface area, the slope and the pitch of every roof, the drainage infrastructure – we can then actually develop a relationship between rainfall events, hard surface area and the flows into the lake,” McCloskey said.
Zoning attributes from Geoscape can then be factored in, and the research team are then able to model the likely hard surface area for the site, and how that might change over time – down to the individual land parcel level.
“We can then start to model the future impacts in terms of volume and velocity of flows of stormwater, which then let you work out remediation requirements, and impact on pollution in those areas,” he said.
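The relationship McCloskey describes — rainfall depth over a known hard surface area driving flow into the lake — resembles a rational-method runoff estimate in its simplest form. The function and all coefficients below are illustrative assumptions, not Sensing Value’s model.

```python
# A minimal runoff sketch: stormwater volume as a function of rainfall depth
# and the hard (impervious) surface area draining to the lake.

def runoff_volume_m3(rain_mm, hard_area_m2, runoff_coeff=0.9):
    """Rational-method style estimate: depth x area x fraction that runs off."""
    return (rain_mm / 1000.0) * hard_area_m2 * runoff_coeff

# Today's built form vs. a modelled future with more hard surface,
# for the same hypothetical 20 mm rainfall event.
today = runoff_volume_m3(rain_mm=20, hard_area_m2=50_000)
future = runoff_volume_m3(rain_mm=20, hard_area_m2=65_000)
print(round(today), round(future))   # → 900 1170
```

Here the 3D model supplies `hard_area_m2` (roof, road and drainage surfaces) and the drain sensors calibrate `runoff_coeff` against observed flows; zoning-driven projections of hard surface area then feed the future scenario.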
From what is now an incredibly rich set of inputs, machine learning techniques can be applied to model far more detailed and complex phenomena, such as predicting algal bloom outbreaks, or the anticipated explosions of the midge population around the wetlands.
“By virtualising the built form, having a very granular structure and then – having a lot of the science of machine learning applied back to the environmental measures, we can actually build very strong and robust relationships from that,” McCloskey said.
McCloskey foresees a major set of impacts from the proliferation of machine learning techniques, though, like Wong, he believes it is the coupling of these techniques with unprecedented data access that will cause the lasting disruption. “I think we’re coming to a point where there’s going to be a divide between old science and new science. The experimental method and the scientific approach is the same, but the actual structure of the data that is available for the scientists to work with is going to be totally transformed,” he said.
He describes a hypothetical traditional data collection regime of monthly manual sample collections from a lake, stark against the potentially dizzying density and frequency of sensor data that can be streamed in real time. McCloskey says that the staggering disparity between the outputs of these scenarios will have a lasting impact on the way science is conducted, and should be seen as an opportunity to develop new methods that could cast new light on historical findings.
“So the old science which has only ever had one estimate at a particular time hasn’t considered the variance that is associated with those estimates. And the new science now has a massive variance,” he said.
“But the bigger picture is – what does this actually do to existing knowledge structures, the thinking and the parameters that we’ve developed in the past? How much can we augment or build on the work of people from 50, 100, 200 years ago – and actually build back. What new science can we develop which will give us a better understanding of complex system modelling?”
If we know one thing for certain, it’s that learning algorithms will be changing the way people live and work for the foreseeable future.