Big sports data – how tech is changing the playing field

The internet of things is dramatically changing the world of sports

“When you’re playing, it’s all about the winning but when you retire you realise there’s a lot more to the game,” says former cricketer Adam Gilchrist.

Gilchrist was speaking at an event organised by software giant SAP ahead of a Cricket World Cup quarter final at the Melbourne Cricket Ground yesterday.

SAP were using their sponsorship of the event to demonstrate their big data analytics capabilities and how they are applied to sports and the internet of things.

Like most industries, the sports world is being radically affected by digitalisation as new technologies change everything from coaching and player welfare through to stadium management and fans’ experience.

Enhancing the fan experience

Two days earlier, rival Melbourne stadium Etihad, in the city’s Docklands district, showed off its new connected ground, where spectators will get high-definition video and internet services through a partnership between Telstra and Cisco.

While Etihad’s demonstration was specifically about ‘fan experience’, the use of the internet of things and pervasive wireless access in a stadium can range from paperless ticketing to managing the food and drink franchises.

In the United States, the leader in connected stadiums, venues are increasingly deploying beacon technologies that let spectators order food and drink to their seats and let venues push special offers during the game.

While neither of the two major Melbourne stadiums offers beacon services at present, the Cisco devices around Etihad can have Bluetooth capabilities added whenever the ground’s management decides to roll them out.

Looking after players

Probably the greatest impact of technology in sport is on player welfare; while coaches and clubs have been enthusiastic adopters of video and tracking technologies for two decades, the rate of change is accelerating as wearable devices change game-day tactics and how injuries are managed.

One of the companies leading this has been Melbourne business Catapult Sports, which has been placing tracking devices on players in Australian Rules football and other codes for a decade.

For coaches this data has been a boon, as it has allowed staff to monitor on-field performance and tightly manage players’ health and fitness.

Professional sports in general have been early adopters of new technologies as a small increase in performance can have immediate and lucrative benefits on the field. Over the last thirty years clubs have adopted the latest in video and data technology to help coaches and players.

As the technology develops, this adoption is accelerating; administrators are looking at placing tracking devices within the balls, goals and boundary lines to give even more information about what’s happening on the field.

Managing the data flow

The challenge for sports organisations, as with every other industry, is in managing all the data being generated.

In sports, managing that data comes with a number of unique imperatives: gamblers seeking access to sensitive data, broadcast rights holders wanting access to game statistics and stadium managers gathering their own data all raise challenges for administrators.

There’s also the question of who owns the data; the players themselves have a claim to their own personal performance data, and conflicts could arise when a player transfers between clubs.

As the sports industry explores the limits of what they can do with data, the world is changing for players, coaches, administrators and supporters.

Gilchrist’s observation that there’s a lot more to professional sports than just what happens on the field is going to become even more true as data science assumes an even greater role in the management of teams, clubs and stadiums.

Paul travelled to Melbourne as a guest of Cisco and SAP.

The high cost of distrust

A lack of trust in data is going to cost the world’s economy over a trillion dollars, a Cisco panel forecasts

A lack of trust in technology’s security could be costing the global economy over a trillion dollars, a panel at the Australian Cisco Live in Melbourne heard yesterday.

The panel, “How do we create trust?”, featured some of Cisco’s executives, including John Stewart, the company’s Security and Trust lead, along with Mike Burgess, Telstra’s Chief Information Security Officer, and Gary Blair, the CEO of the Australian Cyber Security Research Institute.

Blair sees trust in technology as split into two aspects. “Do I as an individual trust an organisation to keep my data secure; safe from harm, safe from breaches and so forth?” he asks. “The second is: will they be transparent in using my data and will I have control of my data?”

In turn, Stewart sees security as a big data problem rather than one of rules, patches and security software. “Data-driven security is the way forward,” he states. “We are constantly studying data to find out what our current risk profile is, what situations we are facing and what hacks we are facing.”

This was the thrust of last year’s Splunk conference, where the CISO of NASDAQ, Mark Graff, described how data analytics are now the front line of information security: threats are so diverse and systems so complex that it’s necessary to watch for abnormal activity rather than try to build fortresses.
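
To make that “watch for abnormal activity” approach concrete, here is a minimal sketch in Python. It isn’t any vendor’s product, just the principle: build a per-user baseline from past behaviour and flag events that fall well outside it. The login events and the two-standard-deviation threshold are invented for illustration.

    from collections import defaultdict
    from statistics import mean, stdev

    # Toy stream of login events: (user, hour of day). In a real deployment
    # these would be parsed from authentication logs at far larger volume.
    events = [
        ("alice", 9), ("alice", 10), ("alice", 11), ("alice", 10),
        ("bob", 14), ("bob", 15), ("bob", 13), ("bob", 14),
        ("alice", 3),  # a 3am login, well outside alice's routine
    ]

    history = defaultdict(list)
    for user, hour in events:
        baseline = history[user]
        # Only score an event once there's enough history for a baseline.
        if len(baseline) >= 4:
            mu, sigma = mean(baseline), stdev(baseline)
            # Flag anything more than two standard deviations from normal.
            if sigma > 0 and abs(hour - mu) > 2 * sigma:
                print(f"ALERT: {user} logged in at {hour}:00 "
                      f"(baseline {mu:.1f} +/- {sigma:.1f})")
        baseline.append(hour)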

The stakes are high for both individual businesses and the economy as technology is now embedded in almost every activity.

“If you suddenly lack confidence in going to online sites, what would happen?” asks Stewart. “You start using the phone, you go into the bank branch to check your account.”

“We have to get many of these things correct, because going backwards takes us to a place where we don’t know how to get back to.”

Gary Blair described how the Boston Consulting Group forecast the digital economy would be worth between $1.5 and $2.5 trillion across the G20 economies by 2016.

“The difference between the two numbers was trust. That’s how large a problem it is in economic terms.”

As we move into the internet of things, that trust is going to extend to the integrity of the sensors telling us the state of our crops, transport and energy systems.

The stakes are only going to get higher and the issues more complex, which in turn is going to demand well-designed, robust systems to retain the trust of businesses and users.

Clawing back our data – Telstra makes metadata available to customers

Australia’s Telstra responds to government data legislation by opening metadata to users

Today Australia’s incumbent telco, Telstra, announced a scheme to give customers access to their personal metadata being stored by the company.

In a post on the company’s Telstra Exchange blog, Chief Risk Officer Kate Hughes described how the service will work: a standard enquiry will be free through the web portal, while more complex queries will attract a fee of $25 or more.

The program is a response to the Australian Parliament’s controversial intention to introduce a mandatory data retention regime that will force telcos and ISPs to retain a record of customers’ connection information.

“We believe that if the police can ask for information relating to you, you should be able to as well,” Hughes wrote.

At present the scheme is quite labour-intensive; a request for information involves a great deal of manual processing under the company’s current systems. However, Hughes is optimistic the company will be able to deal with the workload.

“We haven’t yet built the system that will enable us to quickly get that data,” Hughes told this website in an interview after the announcement. “If you came to us today and asked for that dataset it wouldn’t be a simple request.”

The metadata opportunity

In some respects the metadata proposal is an opportunity for the company to comply with the requirements of the Australian Privacy Principles introduced last year, under which companies are obliged to disclose to their customers any personally identifiable information they hold.

For large organisations like Telstra this presents a problem as it’s difficult to know exactly what information every arm of the business has been collecting. Putting the data into a centralised web portal makes it easier to manage the requirements of various acts.

That Telstra is struggling with this task illustrates the problems the data retention proposals present to smaller companies with far fewer resources to gather, store and manage the information.

Unclear requirements

Another problem facing Hughes, Telstra and the entire Australian communications industry is that no-one is quite clear exactly what data will be required under the act. As proposed, the legislation lets the minister declare what information should be retained, while the industry believes this should be hard-coded into the act itself, making it harder for governments to expand their powers.

What is clear is that regardless of what’s passed into law, technology is going to stay ahead of the legislators, “I do think though this will be very much a ‘point in time’ debate,” Hughes said. “Metadata will evolve more quickly than this legislation can probably keep pace with so I think we will find ourselves back here in two years.”

In many ways Australia’s metadata proposals illustrate the problems facing governments and businesses in managing data during an era when it’s growing exponentially. It may well turn out for telcos, consumers and government agencies that, ultimately, less is more.

Reducing big data risks by collecting less

Just because you can collect data doesn’t mean you should

“To my knowledge we have had no data breaches,” stated Tim Morris at the Tech Leaders conference in the Blue Mountains west of Sydney on Sunday.

Morris, the Australian Federal Police force’s Assistant Commissioner for High Tech Crime Operations, was explaining the controversial data retention bill currently before the nation’s Parliament, which will require telecommunications companies to keep customers’ connection details – considered to be ‘metadata’ – for two years.

The bill is fiercely opposed by Australia’s tech community, including this writer, as an expensive and unnecessary invasion of privacy that will do little to protect the community while exposing ordinary citizens to a wide range of risks.

One of those risks is that of the data stores being hacked, a threat that Morris downplayed with some qualifications.

As we’re seeing in the Snowden revelations, few organisations are secure against determined criminals, and the Australian Federal Police are no exception.

For all organisations, not just government agencies, the question about data should be ‘do we need this?’

In a time of ‘Big Data’, when it’s possible to collect and store massive amounts of information, it’s tempting to become a data hoarder, which exposes managers to various risks, not least that of the data being stolen by hackers. It may well be that reducing those risks simply means collecting less data.

Certainly in Australia, the data retention act will only create more headaches and risks while doing little to help public safety agencies to do their job. Just because you can collect data doesn’t mean you should.

At the mercy of machines

Automation and algorithms are changing business but they are not without risks

Automation is the greatest change we’re going to see in business over the next decade as companies increasingly rely upon computers to make day to day decisions.

Giving control to algorithms however comes with a set of risks which managers and business owners have to prepare for.

Earlier this week the risks in relying on algorithms were shown when car service Uber’s management was slow to react to a situation where its formulas risked a PR disaster.

Uber’s misstep in Sydney shows the weaknesses in the automated business model as its algorithm detected people clamouring for rides out of the city and applied ‘surge pricing’.

Surge pricing is applied when Uber’s system sees high demand – typically around events like New Year’s Eve – although the company has previously been criticised for alleged profiteering during emergencies like Hurricane Sandy in New York.
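
To see how little it takes for such a formula to misfire, consider a deliberately naive sketch of demand-based pricing in Python. This is purely illustrative and not Uber’s actual algorithm; the point is the human circuit-breaker, because without something like the override flag the system treats a crisis exactly like New Year’s Eve.

    # A naive demand-based pricing rule -- illustrative only, not Uber's
    # actual formula.

    def surge_multiplier(requests: int, drivers: int,
                         emergency_override: bool = False) -> float:
        """Scale fares with demand, capped, unless a human intervenes."""
        if emergency_override:
            return 1.0  # a management decision trumps the algorithm
        if drivers == 0:
            return 4.0  # hit the cap rather than divide by zero
        return min(4.0, max(1.0, requests / drivers))

    print(surge_multiplier(50, 40))    # quiet evening: 1.25
    print(surge_multiplier(300, 40))   # New Year's Eve: capped at 4.0
    print(surge_multiplier(300, 40, emergency_override=True))  # crisis: 1.0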

In the light of previous criticism, it’s surprising that Uber stumbled in Sydney during the hostage crisis. Shortly after criticism of the surge pricing arose on the internet, the company’s Sydney social media manager sent out a standard defence of surge pricing.

That message was consistent with both Uber’s business model and how the algorithm that determines the company’s fares works; however it was a potential disaster for the business’ already battered reputation.

An hour later the company’s management had realised their mistake and announced that rides out of Sydney’s Central Business District would be free.

Uber’s mistake is a classic example of the dangers of relying solely on an algorithm to make business decisions; while things will work fine during the normal course of business, there will always be edge cases that create perverse results.

While machines are efficient, they lack context, judgement and compassion, which exposes those who rely solely upon them to unforeseen risks.

As the Internet of Things rolls out, systems will be deployed whose responses are based upon predetermined formulas.

Businesses with overly strict rules and no provision for management intervention in extreme circumstances will find themselves, like Uber, at the mercy of their machines. Staking everything on those machines could turn out to be the riskiest strategy of all.

SurveyMonkey builds its war chest

SurveyMonkey raises another $250 million to fund future expansion

Earlier this year Decoding The New Economy interviewed SurveyMonkey’s CEO Dave Goldberg on his vision for the business and how the company’s services are helping people understand the context of the data pouring into their organisations.

Yesterday SurveyMonkey announced it had raised $250 million through an equity round that values the business at $1.3 billion, only a little more than the total the company has raised since being founded in 1999.

The additional funds are earmarked for privately held SurveyMonkey to acquire more companies and “provide meaningful liquidity to our employees and investors”, with participants in the new funding round, including CEO Goldberg and Google Ventures, increasing their existing stakes.

In his interview with Decoding The New Economy last February, Goldberg described how he sees mobile technologies changing both SurveyMonkey and business in general, along with the challenge for companies in understanding the data pouring into their organisations.

It’s not hard to imagine that many of the acquisitions SurveyMonkey makes with its latest fundraising will be in the mobile and analytics sectors.

A question of ethics

Uber’s missteps remind us that ethics matter in business

At this week’s Australian Gartner Symposium, ethics was one of the key issues flagged for CIOs and IT workers; as technology becomes more pervasive and intrusive, managers are going to have to deal with a myriad of questions about the moral course of action.

So far the news isn’t good for the tech industry with many businesses failing to deal with the masses of data they are accumulating on users, suppliers and competitors.

A failure of transparency

One case in point is that of online ride service Uber. One of Uber’s supposed strengths is its accountability and transparency; the service can track passengers and drivers through their journey, which should, in theory, make the trip safer for everybody.

In reality the tracking doesn’t do a great job of protecting riders and drivers, mainly because Uber has Silicon Valley’s Soviet attitude to customer service. That tracking also creates an ethical issue for the company’s management and one that isn’t being dealt with well.

Compounding Uber’s ethical problem is the attitude of its managers; when a senior vice president suggests smearing a journalist who writes critical stories, it’s clear the company has a problem, and the question for users has to be ‘can we trust these people with our personal data?’

With Uber we may be seeing the first company where data management and misuse results in senior management, and possibly the founder, falling on their sword.

Journalists’ ethics

Another aspect of the latest Uber story is the question of journalistic ethics; indeed, apologists for Uber counter that because some journalists are corrupt, underhand tactics from companies subject to critical articles are justified.

That argument has little merit and tells us more about the people making it than about any journalist’s ethical compass; however, there is a discussion to be had about the behaviour of many reporters.

As someone who regularly receives corporate largess — I attended the Gartner Symposium as a guest of BlackBerry and will be going to an Acer event tomorrow night — this is something I regularly grapple with; my answer (or rationalisation) is that I disclose that largess and let the reader make up their own mind.

However, one thing is clear at these events: everything is on the record unless explicitly stated by the other party. This makes Michael Wolff’s criticism of Ben Smith’s original Uber story in BuzzFeed pretty hollow, and gives us many pointers on Wolff’s own moral compass, as he invites other writers to ‘privileged’ dinners where the default attitude is that everything is off the record.

Playing an insider game

Ultimately we’re seeing an insider game being played, where journalists like Wolff put their own egos above their job of telling their audience what is happening; Jay Rosen highlighted this problem with political coverage but in many respects it’s worse in tech, business and startup journalism.

It’s not surprising that when a game is being played by insiders, they take offence at outsiders criticising them.

Once the customers become outsiders though, the game is drawing to an end. That’s the fate Uber, and much of the tech industry, desperately want to avoid.

Uber in particular has many powerful enemies around the world and clumsy management mis-steps only play into the hands of those who see the company as a threat to their cosy cartels. It would be a shame if Uber’s disruption of the many dysfunctional taxi markets was derailed due to the company’s paranoia and arrogance.

Eventually ethics matter. It’s something both the insular tech industry and those who write on it should remind themselves of.

Democratising the internet of things

A primary school science project shows how communities can start using open data to monitor their neighbourhood’s environment

Last year Alicia Asin of Spanish sensor vendor Libelium spoke to this site about her vision of the internet of things improving transparency in society and government.

A good example of this democratisation of data was at the New South Wales Pearcey Awards last week where the state’s winners of the Young ICT Explorers competition were profiled.

Coming in equal first was a group of students from Neutral Bay’s state primary school with their Bin I.T project, which monitors garbage levels in rubbish bins.

The kids built their project on an Arduino microcontroller that connects to a Google spreadsheet displaying the status of the bin in the school’s classrooms. For $80 they’ve created a small version of what the City of Barcelona is spending millions of euros on.
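
The students’ build runs on an Arduino, but the logging loop is simple enough to sketch in Python, as it might run on a Raspberry Pi or a laptop reading the sensor over serial. The form URL and field name below are placeholders for the Google Form that feeds the spreadsheet, and the sensor reading is faked for illustration.

    import time
    import random
    import requests  # pip install requests

    # Placeholders -- a real build would copy the URL and field ID from its
    # own Google Form, which feeds the spreadsheet the classroom display reads.
    FORM_URL = "https://docs.google.com/forms/d/e/FORM_ID_HERE/formResponse"
    FIELD_ID = "entry.1234567"

    def read_fill_percent() -> int:
        """Stand-in for an ultrasonic sensor measuring how full the bin is."""
        return random.randint(0, 100)

    while True:
        level = read_fill_percent()
        requests.post(FORM_URL, data={FIELD_ID: str(level)}, timeout=10)
        print(f"Logged bin level: {level}%")
        time.sleep(600)  # one reading every ten minutes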

With the accessibility of cheap sensors and cloud computing, it’s possible for students, community groups and activists to take the monitoring of their environment into their own hands; no longer do people have to rely on government agencies or private companies to release information when they can collect it themselves.

Probably the best example of activists taking action themselves is the Safecast project, which was born out of community suspicion of official radiation data following the Fukushima nuclear disaster.

We can expect to see more communities following the Safecast model as concerns about the effects of mining, industrial and fracking operations on neighbourhoods grow.

The Bin I.T project and the kids of Neutral Bay Public School could be showing us where communities will be taking data into their own hands in the near future.

Salesforce faces the end of the database era

Cloud CRM giant Salesforce faces a challenge as unstructured data search and analytics companies like Splunk change the business model.

Last week we looked at how the way we organise information is changing in the face of exploding data volumes.

One of the consequences of the data explosion is that structured databases are beginning to struggle as information sources and business needs are becoming more diverse.

Yesterday, cloud Customer Relationship Management company Salesforce announced its Wave analytics product, which the company says means that “with its schema-free architecture, data no longer has to be pre-sorted or organized in some narrowly defined manner before it can be analyzed.”

The end of the database era

Salesforce’s move is interesting for a company whose success has been based upon structured databases to run its CRM and other services.

The company’s move could be interpreted as a sign that the age of the database is over: that organising data is a fool’s errand as it becomes harder to sort and categorise the information pouring into businesses.

This was the theme at the previous week’s Splunk conference in Las Vegas where the company’s CTO, Todd Papaioannou, told Decoding The New Economy how the world is moving away from structured databases.

“We’re going through a sea change in the analytics space,” Papaioannou said. “What characterised the last thirty years was what I call the ‘schema write’ era; big databases that have a schema where you have to load the data into that schema then transform before you can ask questions of it.”

Breaking the structure

The key with programs like Salesforce and other database-driven products like SAP and Oracle is that both the data structures — the schema — and the questions are largely pre-configured. With the unstructured model, it’s Google-like queries on the stored data that matter.
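
The difference is easy to show in a few lines of Python. Under the schema-on-write model a table would have to be designed before the first record was loaded; with the schema-free approach the raw records are parsed at query time and missing fields are simply tolerated. The records below are invented for illustration.

    import json

    # Raw, mixed-shape event records -- no table was designed for these.
    raw_log = [
        '{"event": "login", "user": "alice", "device": "ios"}',
        '{"event": "purchase", "user": "bob", "amount": 42.5}',
        '{"event": "login", "user": "carol"}',  # no device field at all
    ]

    # Schema-on-read: parse at query time and ask the question directly,
    # tolerating fields that may or may not exist.
    logins = [rec for rec in map(json.loads, raw_log)
              if rec.get("event") == "login"]

    print(len(logins), "logins from devices:",
          [rec.get("device", "unknown") for rec in logins])
    # -> 2 logins from devices: ['ios', 'unknown']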

For companies like Salesforce this means a fundamental change to their underlying product and possibly their business models as well.

It may well be that Salesforce, a company that defined itself with the ‘No Software’ slogan, is now being challenged by the ‘No Database’ era.

Paul travelled to San Francisco and Las Vegas as a guest of Salesforce and Splunk respectively.

You’re being scanned

Recognition technology is advancing rapidly, creating opportunities for marketers and privacy concerns for consumers.

A cute little story appeared on the BBC website today about the Teatreneu club, a comedy venue in Barcelona that is using facial recognition technology to charge for laughs.

In a related story, the Wall Street Journal reports on how marketers are scanning online pictures to identify the people engaging with their brands and the context in which those brands are being used.

With the advances in recognition technology and deeper, faster analytics, it’s now becoming feasible that anything you do that’s posted online or monitored by things like CCTV can be analysed to recognise you, the products you’re using and the place you’re using them in.
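
To give a sense of how low the barrier has become, the sketch below finds faces in a photo using OpenCV’s bundled detector in about a dozen lines of Python. The filenames are placeholders; real brand-monitoring systems layer identity and logo recognition on top of basic detection like this.

    import cv2  # pip install opencv-python

    # Load OpenCV's bundled Haar-cascade face detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("crowd.jpg")  # placeholder filename
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    print(f"Found {len(faces)} face(s)")
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("crowd_annotated.jpg", image)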

Throw all of the data gathered by these technologies into the stew of information that marketers, companies and governments are already collecting, and there are a myriad of good and bad applications to which it could be put.

What both stories show is that technology is moving fast, certainly faster than regulatory agencies and the bulk of the public realise. This is going to present challenges in the near future, not least with privacy issues.

For the Teatreneu club, the experiment should be interesting given rich people tend to laugh less; they may find the folk who laugh the most are the people least able to pay 30 euro cents a giggle.

Adrift in the data lake

We’re awash with data and businesses have to figure out how not to drown in it

Last week Yahoo! closed down its directory pages, ending one of the defining services of the 1990s internet and showing how the internet has changed since the first dot com boom.

The Yahoo! Directory was a victim of a fundamental change in how we manage data, as Google showed it wasn’t necessary to tag and label every piece of information before it could be used.

Yahoo!’s Directory was a classic case of applying old methods to new technologies – in this case carrying out a librarian’s function of cataloguing and categorising every web page.

One problem with that way of saving information is you need to know part of the answer before you can start searching; you need to have some idea of what category your query comes under or the name of the business or person you’re looking for.

That pain point was exploited by the Yellow Pages, whose licensees around the world harvested a healthy cash flow from businesses forced to list under a dozen different categories to make sure prospective customers found them.

With the arrival of Google that way of structuring information came to an end as Sergey Brin and Larry Page’s smart algorithm showed it wasn’t necessary to pigeonhole information into highly structured databases.

Unstructured data

Rather than being structured, data is now becoming ‘unstructured’ and instead of employing an army of clerks to categorise information it’s now the job of computers to analyse that raw information and pick out what we need for our businesses and lives.

As information pours into companies from increasingly diverse sources, a flood that’s becoming so great it’s being referred to as the ‘data lake’, it’s become clear the battle to structure data is lost.

At the Splunk Conference in Las Vegas this week, the term ‘data lake’ is being used a lot as the company explains its technology for analysing business information.

Splunk, along with services like IBM’s Watson and Tableau Software, is one of the companies capitalising on businesses’ need to manage unstructured data by giving customers the tools to analyse their information without first having to shoehorn it into a database.

“Thanks to Google we got to look at data a different way,” says Splunk’s CEO and Chairman Godfrey Sullivan. “You don’t have to know the question before you start the search.”
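
That principle, keeping the raw data and deciding on the questions later, can be sketched with a toy inverted index, the data structure underpinning search tools of this kind. The log lines below are invented for illustration.

    from collections import defaultdict

    # Raw log lines go in untouched; no schema, no categories.
    raw_lines = [
        "ERROR payment gateway timeout user=alice",
        "INFO login ok user=bob",
        "ERROR disk full on node-7",
    ]

    # Build the inverted index: every word maps to the lines containing it.
    index = defaultdict(set)
    for lineno, line in enumerate(raw_lines):
        for word in line.lower().split():
            index[word].add(lineno)

    # A question nobody anticipated at ingest time can still be asked:
    for lineno in sorted(index["error"]):
        print(raw_lines[lineno])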

Diving into the data lake

It’s always dangerous applying simple labels to computing technologies but some terms, like ‘Cloud Computing’, don’t do a bad job of describing the principles involved and so it is with the ‘data lake’.

Rather than a nice, orderly world where everything can be pigeonholed, we now have a fluid environment where it wouldn’t be possible to label everything even if we wanted to. A lake is a good description of the mass of data pouring into our lives.

The web was an early example of having to manage that data lake and Google showed how it could be done. Now it’s the turn of other companies to apply the principles to business.

Google fatally damaged both Yahoo! and the Yellow Pages; other companies stuck in the age of structured data are going to find the future equally dismal. Don’t drown in that data lake.

Paul travelled to Las Vegas as a guest of Splunk.