Artificial intelligence and small business

Artificial intelligence promises to change small businesses dramatically, but are they ready for it?

How can small businesses use Artificial Intelligence? On Flying Solo, Rob Gerrish and I discuss the various ways AI is going to affect smaller enterprises.

One of the important things about the discussion is how AI is going to change a range of industries and jobs. The effect on small businesses over the next twenty years will be as great as that of the personal computer.

The big takeaway I have for business owners is to actively think about how AI and automation are going to affect their industries, customers and individual companies.

Have a listen.

When artificial intelligence becomes pervasive

Artificial intelligence is shaping up to be the next battleground between vendors, but users won’t really care as long as it works.

Once upon a time computers were unusual; getting time on one was reserved for select employees of large corporations and for scientists. Famously, IBM’s Thomas Watson forecast there would only be a need for five computers – although it seems he never actually said that.

Today we’re surrounded by computers in everything from our cars and phones to our teapots and razors and now we’re considering how those devices will affect our future workforce.

At the core of the discussion about computers and the future of work is artificial intelligence. What’s notable, though, is that AI is unlikely to remain a competitive advantage for technology vendors as the functions become built in.

This is already being seen with Microsoft building AI into its databases, and increasingly the intelligence is going to be built into the chips themselves.

In our recent interview with Xero founder Rod Drury, he flagged how AI is going to drive small business accounting. Drury was speaking at the Sydney AWS summit where the hosting company was showing off many of its AI driven services.

While artificial intelligence is going to be embedded and almost invisible to the user, it is going to be important. A good example is Google’s struggle to maintain quality and honesty in its local search results, a process that is beyond the company’s resources if done manually.

For the software vendors, the quality of their AI features is going to be one of their key selling points. This is why Amazon and almost every company in the industry are announcing their own initiatives. Google itself should be one of the leaders in this field.

As automation becomes increasingly taken for granted, artificial intelligence is going to be seen as a fundamental, and invisible, part of computing.

While AI is going to be essential for the technology vendors, users won’t notice it as long as it works properly.

Building the artificially intelligent business

Artificial intelligence and machine learning are a great opportunity for small business, says Xero founder Rod Drury.

It’s been another big year for Xero after the company passed its million-user milestone. At the recent AWS Summit in Sydney, founder Rod Drury spoke to Decoding the New Economy about what’s next for the company and for small businesses.

For a company founded a decade ago, having a million paying customers is a substantial milestone and one Drury seems quite bemused by.

“It hasn’t really sunk in yet. When we did our IPO our promise was a hundred customers and I can remember when it was our first year our target was twelve hundred customers – I think we got to 1300 – so to pass a million is pretty nuts.

“What we’ve found is the accounting software market is probably one of the key industries where you’ll see the benefits of machine learning and AI. The reason for that is massive amounts of data but a pretty tight and structured taxonomy so we processed 1.2 trillion pieces of data in the last 12 months so the graph of data is huge.”

Far more modest volumes of data threaten to overwhelm smaller businesses, and this is where Drury sees artificial intelligence and machine learning as essential for simplifying services and driving user adoption.

“One of the challenges is that small businesses might be great landscape gardeners or plumbers but they are terrible at actually coding transactions so we’re now seeing that wisdom of the crowd and all that data that we can code better than most normal people can. So the big epiphany was ‘why don’t we get rid of coding?’

“Effectively all a small business has to do is make sure things like the date of the invoice are in the system, and we can do the accounting for them and the accountants can check and see what’s going on.”
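
Drury’s ‘get rid of coding’ idea is, at heart, a classification problem: given a bank-line description, predict the account code past users most often chose. As a minimal sketch – this is not Xero’s actual system, and the data and function names here are invented – a simple vote over historical codings looks like this:

```python
from collections import Counter, defaultdict

# Hypothetical training data: bank-line descriptions and the account
# codes past users assigned to them (the "wisdom of the crowd").
HISTORY = [
    ("BUNNINGS WAREHOUSE", "tools"),
    ("BUNNINGS WAREHOUSE", "tools"),
    ("BUNNINGS WAREHOUSE", "office"),
    ("SHELL FUEL", "motor-vehicle"),
    ("SHELL FUEL", "motor-vehicle"),
]

def build_model(history):
    """Count, per description word, how often each code was chosen."""
    model = defaultdict(Counter)
    for description, code in history:
        for word in description.lower().split():
            model[word][code] += 1
    return model

def suggest_code(model, description):
    """Suggest the code most often used for this description's words."""
    votes = Counter()
    for word in description.lower().split():
        votes.update(model[word])
    return votes.most_common(1)[0][0] if votes else None

model = build_model(HISTORY)
print(suggest_code(model, "BUNNINGS WAREHOUSE 123"))  # "tools"
print(suggest_code(model, "SHELL FUEL STATION"))      # "motor-vehicle"
```

A production system would use far richer features and models, but the principle is the one Drury describes: millions of prior codings become training data, so the software can code transactions better than most business owners can.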

This automation of basic accounting tasks, and how these features are now embedded in cloud computing offerings, is changing how businesses – particularly software companies – are operating.

“You can’t run domestic platforms any more, because every accountant will have customers that are exporting and what we’re seeing now is global platforms connecting together so, for example, HSBC announced its bank feeds and what we’re doing with Stripe and Square.

“All of the accountants need to be coaching small businesses on exporting. That’s what creates jobs.”

That global focus of business is now changing how companies grow, particularly those from smaller or remote economies like Australia and New Zealand.

“What we’re finding now is the last generation of the late 90s and early 2000s was very much enterprise technology and normally companies would get to a certain point and then a US public company would have to buy them.

“Now we’re seeing truly global businesses that aren’t selling out quickly, they’re actually creating businesses from this part of the world. People don’t have to live in Silicon Valley anymore, they can live in Sydney’s Northern Beaches or Auckland or Wellington and do world class work.”

That remoteness is something that challenges Xero, though, as the company tries to get traction in the US market, which is dominated by Intuit and fragmented across regional and industry lines.

“As you start off as a company listed in Australia and New Zealand it’s harder as you don’t get the benefit of the density in a smaller market. Now we’ve done enough to get these bank deals, we can now attract executives of the calibre that feels like long term leadership and that’s the benefit of doing the hard yards for a few years.

“We’re past the beachhead phase and now we’re building the long term business. We want to be a big fish in a small pond.”

Overall Drury sees the cloud, particularly Amazon Web Services, as one of the great liberators for business as smaller companies follow in Xero’s footsteps.

“This is one of the amazing things AWS have done, they’ve created this flat global playing field.”

Automating out white collar jobs

The effects of automation are difficult to predict but the machines are coming for management and white collar roles.

The statistics about the challenging future of work continue to roll in, with the Harvard Business Review looking at how artificial intelligence is changing the role of knowledge workers and the World Economic Forum reporting that Japan is already well down the track of automating many ‘white collar’ roles.

A couple of decades or so back, the assumption was ‘knowledge work’ represented the future of employment and the thought of management being replaced by computers or robots was unthinkable.

That hasn’t proved to be so. The low-end jobs we thought would be taken up by displaced industrial workers were offshored, subjected to a ‘race to the bottom’ in pay rates and are now increasingly being automated.

While the robots first came for call centre workers, it’s quite likely the next wave will affect white collar workers, reports Dan Tynan in The Guardian, who has an overview of the likely fates of various occupations.

A good example of the shift is lawyers, with Tynan citing DoNotPay, a company that uses AI to help customers fight traffic infringements, as an example of the legal profession being automated away.

Bad for young lawyers

This, though, isn’t new in the legal profession. Over the past twenty years many roles in fields such as property conveyancing and contract drafting have been offshored, so much so that junior lawyers’ pay rates and job prospects have collapsed as entry-level jobs have dried up.

How the legal profession has used automation and offshoring is a good indicator of how these traditional industries are evolving: a senior lawyer can now handle more work, reducing the need for juniors and paralegals. The work stays with the older worker while younger workers need to look elsewhere.

While Tynan discounts the effects of automation on the construction and health industries, those sectors are similarly being changed. Robot bricklayers, for example, allow older workers to stay in the industry longer and increase productivity.

The internet of things and artificial intelligence are similarly taking the load off nurses and doctors while making diagnostics faster and easier, with major ramifications for those industries.

Dirty data

There are weaknesses in a data-driven world, and these give us clues to where the future jobs may lie. The Harvard Business Review optimistically notes many roles are “composed of work that can be codified into standard steps and of decisions based on cleanly formatted data”; however, obtaining ‘cleanly formatted data’ is a challenge for many organisations, and the work of managing exceptions, or ‘dirty data’ feeds, shouldn’t be underestimated.

Unexpected consequences exist as well, with the media industry being a good example. While the demand for content has exploded, the rise of user generated content on social media and the collapse of advertising models has upended publishing, writing and journalism. And while artificial intelligence and animation could replace actors and reporters, they haven’t done so in a major way yet.

How industry sectors will be affected by automation is something the US Bureau of Labor Statistics looked at in 2010.

The roles the US BLS estimates will be less affected by automation may be more affected than we think. How the retail and media industries changed in the twentieth century is instructive: the business models at the beginning of the century were upended, yet by the century’s end employment in those sectors was higher than ever.

The future of work isn’t obvious and the effects of automation bring a range of unforeseen consequences and opportunities – this is why we can’t rest on our laurels and assume our jobs, trades and professions will be untouched by change.

Rethinking artificial intelligence and the smarthome

Facebook founder and CEO Mark Zuckerberg spent 2016 experimenting with artificial intelligence in his smarthome and came to some interesting conclusions about AI and machine learning

What happens when the founder and CEO of one of the world’s biggest tech companies decides to create a genuinely smart home? Facebook’s Mark Zuckerberg spent 2016 finding out.

“My goal was to learn about the state of artificial intelligence — where we’re further along than people realize and where we’re still a long ways off,” Zuckerberg writes in a blog post.

The immediate problem Zuckerberg faced in creating his home-made Jarvis automation system was that many household appliances are not network ready and, for those that are, the proliferation of standards makes tying them together difficult.

For assistants like Jarvis to be able to control everything in homes for more people, we need more devices to be connected and the industry needs to develop common APIs and standards for the devices to talk to each other.
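
The common standards Zuckerberg is calling for amount to an agreed interface that every vendor’s adapter implements, so a controller only ever has to speak one protocol. A minimal sketch in Python – all class and method names here are invented for illustration – might look like this:

```python
from abc import ABC, abstractmethod

class SmartDevice(ABC):
    """The shared set of verbs every vendor adapter must implement."""
    @abstractmethod
    def turn_on(self): ...
    @abstractmethod
    def turn_off(self): ...
    @abstractmethod
    def status(self) -> str: ...

class LegacyToasterAdapter(SmartDevice):
    """Wraps a vendor-specific appliance protocol behind the common API."""
    def __init__(self):
        self._powered = False
    def turn_on(self):
        self._powered = True   # a real adapter would call the vendor's protocol here
    def turn_off(self):
        self._powered = False
    def status(self):
        return "on" if self._powered else "off"

class Hub:
    """A Jarvis-style controller that only speaks the common API."""
    def __init__(self):
        self.devices = {}
    def register(self, name, device: SmartDevice):
        self.devices[name] = device
    def command(self, name, verb):
        getattr(self.devices[name], verb)()

hub = Hub()
hub.register("toaster", LegacyToasterAdapter())
hub.command("toaster", "turn_on")
print(hub.devices["toaster"].status())  # "on"
```

The point of the sketch is that the hub never needs to know which vendor built the toaster – which is exactly what today’s proliferation of incompatible standards prevents.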

Having jury-rigged a number of workarounds, including a cannon to fire his favourite t-shirts from the wardrobe and a retrofitted 1950s toaster to make his breakfast, Zuckerberg then faced another problem – the user interface.

While voice is presumed to be the main way people will control the smart homes of the future, it turns out that text is a much less obtrusive way to communicate with the system.

One thing that surprised me about my communication with Jarvis is that when I have the choice of either speaking or texting, I text much more than I would have expected. This is for a number of reasons, but mostly it feels less disturbing to people around me. If I’m doing something that relates to them, like playing music for all of us, then speaking feels fine, but most of the time text feels more appropriate. Similarly, when Jarvis communicates with me, I’d much rather receive that over text message than voice. That’s because voice can be disruptive and text gives you more control of when you want to look at it.

Given the lead companies like Amazon, Microsoft, Google and Apple have over Facebook in voice recognition, it’s easy to dismiss Zuckerberg’s emphasis on text, but his view does feel correct. Having a HAL-style voice booming through the house isn’t optimal when you have a sleeping partner, children or house guests.

Zuckerberg’s view also overlooks other control methods: Microsoft and Apple have been doing much in the realm of touch interfaces, while wearables offer a range of possibilities for people to communicate with systems.

The bigger problem Zuckerberg identifies is with Artificial Intelligence itself. At this stage of its development AI struggles to understand context and machine learning is far from mature.

Another interesting limitation of speech recognition systems — and machine learning systems more generally — is that they are more optimized for specific problems than most people realize. For example, understanding a person talking to a computer is a subtly different problem from understanding a person talking to another person.

Ultimately Zuckerberg concludes that we have a long way to go with artificial intelligence: while there are many things we’re going to be able to do in the near term, the real challenge lies in understanding the learning process itself, not to mention the concept of intelligence.

In a way, AI is both closer and farther off than we imagine. AI is closer to being able to do more powerful things than most people expect — driving cars, curing diseases, discovering planets, understanding media. Those will each have a great impact on the world, but we’re still figuring out what real intelligence is.

Perhaps we’re looking at intelligence and learning from a human perspective. Maybe we need to approach artificial intelligence and machine learning from the computer’s perspective – what does intelligence look like to a machine?

Creating a Silicon Brain

Should we be rethinking how computers are designed? The co-founder and CEO of chip designer Nervana, Naveen Rao, believes we should look to the brain.

Should we be rethinking how computers are designed? The co-founder and CEO of chip designer Nervana, Naveen Rao, believes so as artificial intelligence applications change the way systems work.

“A brain only uses 20 watts of power to do far more than a laptop,” observes Naveen Rao at a breakfast following Intel’s Artificial Intelligence Day in San Francisco last week.

“Presumably the brain is doing more computation than your laptop,” he continues. “What are we missing? Why is there such a big difference between what a computer can do and what a brain can do? Let’s try to understand that and maybe what we learn can change how we design computers.”

A lifetime passion

Rao, whose company was acquired by Intel for over four hundred million dollars last August, was discussing the quest to make computers operate more like brains and less like adding machines.

For Rao this has been a lifetime passion. Having graduated as an electrical engineer and spent most of his career designing computer chips at Sun Microsystems and various startups, he quit his job to do a PhD in neuroscience: “after ten years, I wanted to return to my passion of trying to use biology to better understand computers.”

From that combination of study and experience Nervana was founded in 2014 and raised twenty million dollars from investors before being acquired by Intel.

Replicating the bird, not the feathers

The key to creating a computer that acts more like a brain is to get the individual CPUs working together in a network similar to the mind’s neural paths. “Look at a bird compared to a plane,” Rao says. “We don’t replicate the feathers, but we do the function.”

Doing this meant rethinking how processors are designed: “there are tried and true methods of chip architecture that we basically questioned.”

“We don’t need high levels of generality. We don’t need this to work on energy or weather simulations. We removed some of that baggage.”

Paring back the processor

So the Nervana team stripped down the individual processor, removing functions, such as caches, that are built into today’s advanced CPUs. Those lighter-weight, and less power-hungry, units can then be combined into neural networks more suited to artificial intelligence workloads than today’s computers.
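
Stripping out caches and generality works because neural network inference is dominated by one very regular operation: the multiply-accumulate. A dense layer, sketched in plain Python below, is little more than that loop repeated millions of times – exactly the workload a specialised chip can run in massive parallel:

```python
def dense_layer(weights, inputs, bias):
    """One neural network layer: output[i] = relu(sum_j weights[i][j] * inputs[j] + bias[i])."""
    outputs = []
    for row, b in zip(weights, bias):
        acc = b
        for w, x in zip(row, inputs):
            acc += w * x               # the multiply-accumulate an AI chip optimises
        outputs.append(max(0.0, acc))  # ReLU nonlinearity
    return outputs

# Two neurons reading two inputs.
print(dense_layer([[1.0, -1.0], [0.5, 0.5]], [2.0, 1.0], [0.0, -1.0]))  # [1.0, 0.5]
```

Because the access pattern is this predictable, a processor doesn’t need the caches and branch machinery a general-purpose CPU carries – it just needs as many multiply-accumulate units as possible, which is the trade-off Rao describes.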

“Nvidia, this sort of fell into their laps,” observes Rao of Intel’s key competitor in the AI, graphics and gaming space. “It just so happens the graphics functions on their chips are suited to artificial intelligence applications.”

Without the more complex functions of modern CPUs, Rao and the Nervana team see the opportunity to build more flexible computers better suited to artificial intelligence applications.

Intel focuses on AI

That focus on AI has seen Intel branding its AI initiatives under the Nervana name as the iconic Silicon Valley company tries to keep ahead of more nimble competitors like Qualcomm and Nvidia.

For the computer industry, artificial intelligence promises to be the next major advance – something necessary if we are ever going to make sense of the masses of data being collected by smart devices, and the reason Microsoft, Google, Amazon and Facebook are all making massive investments in the field.

Regardless of whether Intel and Nervana are successful in the AI marketplace, Rao sees the entire field of neural computing as a great opportunity. “It’s exciting, there’s lots of chances to innovate.”

Paul travelled to San Francisco as a guest of Intel

Microsoft quietly buries its smartphone ambitions

Microsoft quietly exits the smartphone industry, hopefully to focus on cloud computing and artificial intelligence.

Last week Microsoft quietly buried its smartphone ambitions with the announcement they would shed 1,850 jobs, largely from the remains of the Nokia business they acquired two years ago.

Microsoft’s Lumia exercise was expensive for the company but even more costly in terms of missed opportunities.

Those opportunities are now in cloud computing and artificial intelligence services. Shareholders will be hoping the current CEO Satya Nadella executes a lot better on them than his predecessor did with smartphones.

IBM and the era of cognitive computing

IBM CEO Ginni Rometty describes the future of business as cognitive computing – but will her customers be part of that future?

“If you’re digital now, you’ll be cognitive tomorrow,” says Ginni Rometty, the head of IBM.

Rometty was speaking at the Sydney IBM Think forum today, where she laid out her vision of IBM’s role in the data-rich organisation of the future.

IBM’s pitch is that services like its Watson artificial intelligence platform are a key part of business as companies try to differentiate themselves in the new economy.

While Rometty’s view is correct, the question is whether IBM is the company to deliver it. The audience in Sydney was largely incumbent corporations and government agencies, and it was almost sad that some of the panelists citing their digital smarts were from Australian businesses that have been tragically leaden in responding to changes in their markets over the last two decades.

In the first panel Rometty was joined by Andrew Thorburn and Richard Umbers, the respective CEOs of the National Australia Bank and the Myer department store chain.

Thorburn’s comments about NAB being an agile fintech company were somewhat at odds with the reality of Australia’s housing-addicted banking sector, while Umbers’ view that Myer is leading the way in customer experience is almost laughable given how his company has missed almost every development in retail over the past twenty years.

Leaden corporations are Rometty’s core customers, however – it still remains true that no-one at companies like Myer and NAB gets sacked for buying IBM.

“We’ve been part of your past, and I hope we can be part of your future,” was Rometty’s conclusion to her keynote. It remains to be seen whether her customers are part of that future.

Microsoft and the AI future

Microsoft’s continued push into artificial intelligence is part of an economy wide shift

Despite the embarrassment of their foul-mouthed racist bot, Microsoft are pressing on with their move into artificial intelligence.

Ahead of this week’s Launch event in San Francisco, Microsoft’s CEO Satya Nadella laid out his vision for the company’s artificial intelligence efforts, describing a range of ‘bots’ that carry out small tasks.

Bloomberg tagged Nadella’s vision as ‘the spawn of Clippy’, referring to the incredibly irritating help assistant Microsoft included with Office 97.

Tech site The Register parodied Clippy mercilessly in their short lived IT comedy program Salmon Days, as shown in this not safe for work trailer. While The Reg staff were brutal in their language and treatment of Clippy, most Microsoft Office users at the time shared their feelings.

While Clippy may be making a comeback at Microsoft, albeit in a less irritating form, other companies are moving ahead with AI in the workplace.

Robot manufacturer Fanuc showed off its self-learning machine a few weeks ago, which shows just how deeply AI is embedding itself in industry. Already there are many AI applications in software, like Facebook’s algorithms and Google’s search functions, with the search engine’s engineers acknowledging they aren’t quite sure what the robots are up to.

For organisations dealing with massive amounts of data, artificial intelligence based programs are going to be essential in dealing with unexpected or fast moving events. Those programs will also affect a lot of occupations we currently think are immune from workplace automation.

Exploring the downsides of artificial intelligence

Microsoft’s racist bot shows the limits and dangers of artificial intelligence

Microsoft Research ran an experiment last week on their artificial intelligence engine, setting a naive bot loose to learn from what it was told on Twitter.

Within two days Tay, as they named the bot, had become an obnoxious racist as Twitter users directed offensive comments at the account.

Realising the monster they had created, Microsoft shut the experiment down. The result is less than encouraging for the artificial intelligence community.

Self learning robots may have a lot of power and potential, but if they’re learning from humans they may pick up bad habits. We need to tread carefully with this.

What happens when machines start to learn

Deep reinforcement learning promises to change the way robots are taught to do tasks

Computer programming is one of the jobs of the future. Right?

Maybe not, as Japanese industrial robot maker Fanuc demonstrates with their latest robot that learns on the job.

The MIT Technology Review describes how the robot analyses a task and fine tunes its own operations to do the task properly.

Fanuc’s robot uses a technique known as deep reinforcement learning to train itself, over time, how to learn a new task. It tries picking up objects while capturing video footage of the process. Each time it succeeds or fails, it remembers how the object looked, knowledge that is used to refine a deep learning model, or a large neural network, that controls its action.
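
The loop MIT describes – try, record success or failure, refine the model – is the essence of reinforcement learning. As a toy stand-in (Fanuc trains a deep neural network on video footage; here a simple success-count table plays the model’s role, and the positions and success rates are invented), a robot choosing between grasp positions might learn like this:

```python
import random

random.seed(0)

GRASP_POSITIONS = ["left", "centre", "right"]
# The true success rate of each grasp, hidden from the learner.
TRUE_SUCCESS_RATE = {"left": 0.2, "centre": 0.9, "right": 0.4}

value = {p: 0.0 for p in GRASP_POSITIONS}   # learned estimate per position
counts = {p: 0 for p in GRASP_POSITIONS}

def attempt(position):
    """One trial: succeed with the position's (hidden) probability."""
    return random.random() < TRUE_SUCCESS_RATE[position]

for trial in range(2000):
    # Epsilon-greedy: mostly exploit the best estimate, sometimes explore.
    if random.random() < 0.1:
        pos = random.choice(GRASP_POSITIONS)
    else:
        pos = max(value, key=value.get)
    reward = 1.0 if attempt(pos) else 0.0
    counts[pos] += 1
    value[pos] += (reward - value[pos]) / counts[pos]  # running-mean update

print(max(value, key=value.get))  # should converge on "centre"
```

Each success or failure nudges the estimate for that grasp, just as each of the Fanuc robot’s attempts refines its neural network – and, as the article notes, several learners can pool these updates to train faster in parallel.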

While machines running on deep reinforcement learning won’t make programmers redundant, it shows basic operations even in those fields are going to be increasingly automated. Just knowing a programming language is not necessarily a passport to future prosperity.

Another aspect flagged in the MIT article is how robots can learn in parallel, so groups can work together to understand and optimise tasks.

While Fanuc and the MIT article are discussing small groups of similar robots working together, it’s not hard to see this working on a global scale. What happens when your home vacuum cleaner starts talking to a US Air Force autonomous drone remains to be seen.

How artificial intelligence can outguess people

Google have developed a tool that determines a location from a photograph

Identifying a location from a photograph is hard, and it’s something people can’t do very well. MIT’s Technology Review reports Google’s researchers have developed a tool that figures out the location of an image with twice the accuracy of humans.
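
According to the MIT report, the trick is to treat geolocation as a classification problem: divide the world into grid cells and have a neural network pick the most likely cell for a photo. The image model is the hard part; the cell bookkeeping, sketched below with an invented, deliberately coarse 10-degree grid, is simple:

```python
CELL_DEGREES = 10  # coarse 10-degree cells, purely for illustration

def cell_for(lat, lon):
    """Map a latitude/longitude pair to a discrete grid-cell id."""
    row = int((lat + 90) // CELL_DEGREES)    # shift so rows start at 0
    col = int((lon + 180) // CELL_DEGREES)   # shift so columns start at 0
    return row * (360 // CELL_DEGREES) + col

# Nearby points share a cell; distant cities land in different ones.
print(cell_for(-33.87, 151.21))  # Sydney
print(cell_for(-33.95, 151.10))  # a nearby point: same cell id
print(cell_for(-36.85, 174.76))  # Auckland: a different cell id
```

A real system uses adaptive cells (smaller where photos are dense) and a deep network to score them, but framing “where was this taken?” as “which cell is most likely?” is what lets standard image-classification machinery attack the problem at all.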

To illustrate the point, the researchers pitted their tool against human players of the Geoguessr game, which tests how well people can place a location from an image.

While this could be seen as a gimmick, it again shows how computing power is being used in areas that were seen as being immune from technology not so long ago and how artificial intelligence will be applied in various fields.

For the moment, applying artificial intelligence to seemingly trivial fields like games gives researchers the opportunity to test it before it is applied to areas like cancer treatment.

As artificial intelligence advances, a whole range of existing fields are going to be disrupted – particularly in ‘knowledge industry’ fields like law, consulting and management – while new industries and occupations will arise out of these technologies.