How That “Chatbot” of Yours Might Be Exposing Data to Unauthorized People
The words “artificial intelligence” are thrown around a lot these days, but a growing number of supposed AI startups actually have very little to do with the technology. Research conducted by MMC Ventures couldn’t find any evidence of artificial intelligence in 40 percent of European so-called “AI” startups. And the vast majority, 9 out of 10, are in the business market.
Moreover, even companies that are developing AI solutions often supplement them with remote workers who jump in whenever the machine fails to complete a task by itself.
There’s nothing wrong with mixing human and artificial intelligence to provide a superior product; to a large degree, that’s the future of work. The problem arises when companies are not honest about it. As revealed by The Guardian in a recent investigative article, many don’t even bother to vet the people who will be looking at your data; instead, they rely on outsourced gig workers contracted through third-party services like Amazon’s Mechanical Turk.
Here’s how it works: when their AI programs fail to understand or solve a problem, the task is sent to a third-party service, where an army of contractors competes to solve it as quickly as possible for a few cents each. No vetting. Anyone, anywhere, can join in just a few minutes, which is great for some tasks and unacceptable for others.
Not all information is created equal, and enterprises need control over who can see what. This lack of transparency limits your ability to protect the privacy of your customers and employees, and often results in sensitive information being exposed to third parties without your explicit knowledge or consent.
This is a big deal for HR. Resumes contain personally identifiable information, so their handling falls under the umbrella of the GDPR and other data privacy regulations around the globe. How can you guarantee that your candidates’ personal data is protected if so-called AI solutions are exposing it to strangers sitting in their living rooms?
This is not an argument against AI; it’s an argument for getting to know your vendors and making sure you can trust them not to expose information to third parties without your knowledge or consent.
At Avature, we pride ourselves on letting our customers control what information is exposed, where and when. Customers can grant and revoke third-party access at will, and define what those parties can see and for how long with incredible granularity. This enables them to comply with privacy regulations, present and future, by adapting how their systems handle data on the fly.
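To make the idea concrete, here is a minimal sketch of what time-boxed, field-level access grants can look like. This is purely illustrative: the class and field names are hypothetical and do not reflect any actual Avature implementation.

```python
from datetime import datetime, timedelta

class AccessGrant:
    """A time-boxed grant limiting which fields a third party may see.

    Hypothetical example: not an actual Avature API.
    """

    def __init__(self, grantee, fields, duration):
        self.grantee = grantee
        self.fields = set(fields)
        self.expires_at = datetime.utcnow() + duration
        self.revoked = False

    def revoke(self):
        """Revoke access immediately, regardless of the expiry date."""
        self.revoked = True

    def is_active(self, now=None):
        now = now or datetime.utcnow()
        return not self.revoked and now < self.expires_at

def visible_record(record, grant):
    """Return only the fields the grant permits; nothing if inactive."""
    if not grant.is_active():
        return {}
    return {k: v for k, v in record.items() if k in grant.fields}

# A screening vendor gets 30 days of access to name and email only;
# the salary field never leaves the system.
candidate = {"name": "A. Doe", "email": "a@example.com", "salary": 90000}
grant = AccessGrant("screening-vendor", ["name", "email"], timedelta(days=30))
print(visible_record(candidate, grant))  # name and email only
grant.revoke()
print(visible_record(candidate, grant))  # {} — access withdrawn at will
```

The key design point is that the scope (which fields) and the lifetime (how long) live with the grant, not with the data, so access can be narrowed or withdrawn without touching the records themselves.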
Our approach to AI is also different from the black-box approach that is ubiquitous in the industry. We favor transparency, which is why we develop our AI tools in-house and design them from the get-go to give our customers all the information they need to understand what the system is doing. When a recruiter searches for a specific position, Avature Semantic Search extends the query with terms it knows are related, and every single one of them is displayed and can be turned on and off by the user.
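The transparent query expansion described above can be sketched in a few lines. The related-terms map below is a hand-written stand-in for whatever a real system would learn; the function names are hypothetical, not Avature’s actual interface.

```python
# Hypothetical related-terms map; a production system would learn these
# associations rather than hard-code them.
RELATED = {
    "java developer": ["j2ee", "spring", "jvm"],
}

def expand_query(query, disabled=frozenset()):
    """Expand a search query with related terms.

    Every expansion term is returned explicitly so the UI can display it
    and let the user toggle it off; terms in `disabled` are excluded.
    """
    terms = [query] + RELATED.get(query.lower(), [])
    return [t for t in terms if t not in disabled]

print(expand_query("Java Developer"))            # full expansion, all visible
print(expand_query("Java Developer", {"j2ee"}))  # user has toggled "j2ee" off
```

Because the expansion is an explicit, user-editable list rather than a hidden rewrite, the recruiter always knows exactly why a candidate matched.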
GDPR, security and data privacy are a growing concern for the industry, so it’s increasingly important to look beyond the feature list and carefully assess who you’re purchasing technology from. You need to understand your vendor’s core values, development principles and strategic roadmap.