I recently read a white paper on cybersecurity in AI (artificial intelligence), written by Maurice Dawkins of Chicago Institute of Technology and Annemarie Szakonyi of St Louis University. The gist of the white paper is a call for awareness of the importance of cybersecurity and education in applications that use artificial intelligence. The authors emphasize how easily compromised AI tools and systems can become catastrophic. To add to the dreadfulness, imagine AI simply going wrong: an algorithm producing inaccurate results because its intelligence is underdeveloped, too broad, or compromised by associated systems is a security risk that more and more AI professionals are trying to narrow in on and prevent.
The brains behind AI is smart learning, as opposed to machine learning, whose focus is on adapting through experience. Basically, AI derives its behavior from a set of conditions and displays a result when those conditions are met. This is where the complicated part, prediction, comes in.
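To make that concrete, here is a minimal sketch in Python. It is purely hypothetical, not taken from the white paper: the function name, thresholds, and inputs are all invented for illustration. It shows a condition-based rule that returns a result only when its fixed conditions are met, and how anything the designer didn't anticipate silently falls through.

```python
# Hypothetical sketch: a condition-based rule that fires an alert only when
# every fixed condition is met. Prediction gets hard when reality falls
# outside the conditions the designer anticipated.

def flag_patient(temperature_c: float, heart_rate: int, spo2: int) -> str:
    """Return an alert level based on a fixed set of conditions."""
    if temperature_c >= 38.0 and heart_rate > 100:
        return "ALERT: possible infection"
    if spo2 < 92:
        return "ALERT: low oxygen saturation"
    # Anything the conditions don't cover silently falls through here,
    # which is exactly where an underdeveloped rule set goes wrong.
    return "no alert"

# Marked tachycardia, but the fever condition isn't met, so nothing fires.
print(flag_patient(temperature_c=37.2, heart_rate=145, spo2=95))  # -> "no alert"
```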
My most recent years have been spent in health technology. We use artificial intelligence in all kinds of ways in the applications we build, from our custom-built alert systems to our reporting and regenerative services, which are primarily focused on oncology practices all over the world. I've seen a need for it in health technology for years; however, my biggest concern with AI in this industry is how fast it's being adopted without fault-proofing.
My concern is valid, and I've recently begun to see more technologists voice their concern around standards. Imagine a healthcare system processing COVID-19 results that reports correctly for individuals showing COVID symptoms after a test, but falsely for individuals who do not, or whose results are actually invalid because the test was taken incorrectly, all because the algorithm behind the AI was built on too few conditions to deliver an accurate result, or any result at all. Though it sounds far-fetched, this is exactly what just happened with start-up Curative's COVID-19 test. Curative produces a test that allows patients to swab the inside of their own mouths and drop the sample into a drop box to be sent to a lab for results; in nearly all cases, the swab is done on-site in front of a healthcare aide. The FDA published a notice that Curative's test may produce a false result if it is used incorrectly, a direct result of the limited conditions set in the design of its AI test-result interpretation.
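To illustrate the gap, here is a hedged sketch in Python. It is not Curative's actual logic; the function names, thresholds, and the control-signal check are assumptions made purely for illustration. It contrasts an interpreter with only two conditions (positive or negative) against one that adds a validity condition for a poorly collected sample.

```python
# Hypothetical sketch, not Curative's real algorithm: a result interpreter
# whose only conditions are "signal above threshold" vs. not. Without a
# validity check, a badly collected swab still gets reported as negative
# instead of invalid.

POSITIVE_THRESHOLD = 1.0
CONTROL_THRESHOLD = 0.5  # assumed cutoff showing the sample was collected properly

def interpret_naive(target_signal: float) -> str:
    # Limited conditions: every sample is forced into positive/negative.
    return "positive" if target_signal >= POSITIVE_THRESHOLD else "negative"

def interpret_with_validity(target_signal: float, control_signal: float) -> str:
    # Extra condition: if the control didn't register, the test is invalid,
    # not negative.
    if control_signal < CONTROL_THRESHOLD:
        return "invalid - recollect sample"
    return "positive" if target_signal >= POSITIVE_THRESHOLD else "negative"

# A poorly self-collected swab: almost no material, so both signals are low.
print(interpret_naive(0.1))                # -> "negative" (false reassurance)
print(interpret_with_validity(0.1, 0.05))  # -> "invalid - recollect sample"
```

The point isn't the thresholds themselves; it's that the second version encodes one more condition about how the sample was taken, which is exactly the kind of specific a limited design leaves out.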
Knowing all these potential pitfalls prompted me to dig deeper into security, self-healing, fault-proofing, and other important areas of AI that aren't visible from the surface. It's a black hole that many researchers have been tasked with looking into.
A lot of AI programs do not take into account the critical specifics needed to predict an outcome. This is why the adoption of standards is being pushed, and the argument is that standardized software development practices should serve as the foundation of AI. The biggest question is: who really regulates development practices? Devices have taken on this responsibility, and even they reek of vulnerabilities.
In general, there are just so many oddities and caveats with AI right now. Then there's usage and resources, such as data storage and consumption, where I'm not sure to what degree ethics actually play into the mix. Running complex, resource-hungry systems with AI at the core of the process is also not something that is regulated. Believe it or not, heavy energy consumption is the norm when AI and ML are combined. At the moment, reducing energy consumption and the carbon footprint is mostly associated with performance-friendly algorithms and equipment proven to require fewer resources. Several researchers are working to make this issue central to how AI is adopted.
Where do we go from here? Continued research on how to make artificial intelligence more intelligent and fault-proof should be the aim.