The National Telecommunications and Information Administration (NTIA), the agency within the U.S. Department of Commerce responsible for advising the President on telecommunications and information policy, recently released a Request for Comment (RFC) to gather input from AI stakeholders on both regulatory and self-regulatory frameworks. These frameworks are intended to support the development of AI audits, assessments, certifications, and other mechanisms that can provide credible evidence of the safety, reliability, and effectiveness of AI systems.
Re: NTIA Docket No. 230407-0093 – Request for Comment on AI Accountability Policy
Dear National Telecommunications and Information Administration (NTIA),
As a leading provider of AI-based healthcare solutions, Diagnostic Robotics understands the importance of ensuring that AI systems are effective, ethical, safe, and trustworthy. The company has applied AI for several years and has seen that, with appropriate best practices and safeguards, the technology can benefit many industries, including healthcare.
However, the development and deployment of AI systems have raised numerous concerns about their performance, reliability, and ethical implications. Two of these issues demand particular attention: bias and distinguishing fake from real data.
Bias in AI systems, especially in healthcare, is a major pitfall that we need to address. While AI holds immense promise in making healthcare more affordable, effective, and accessible, there is a risk that algorithms will learn from and perpetuate any bias already present in the data they rely on. For example, if historical data shows that certain population groups received lower levels of care, an algorithm trained on that data may assign those groups lower risk scores for the same condition, perpetuating an inequitable system.
To address bias effectively, companies developing AI-based products must be aware of the different types of biases that can occur in AI systems:
- Data bias: This occurs when the training data used to develop the AI model is not representative of the population it is intended to serve.
- Algorithmic bias: This occurs when the model's design or optimization objective produces biased outputs, even if the training data is unbiased.
- Representation bias: This occurs when the AI model fails to represent certain groups or individuals, such as marginalized or underrepresented populations.
- User bias: This occurs when users apply the AI model or interpret its results in a biased way.
- Evaluation bias: This occurs when the evaluation of the AI model is biased, such as using inappropriate metrics or biased test data.
Any company developing AI-based products must be aware of these biases and take steps to mitigate them when developing and using AI models. Quality tests should be executed throughout the AI system's full life cycle, both during and after training, to reduce the impact of bias.
During the training phase, several steps can be taken:
- Algorithmic fairness: Incorporating algorithmic fairness techniques, such as precision and recall parity and demographic parity, helps ensure fair and unbiased predictions across different demographic groups (a minimal sketch of such checks appears after this list).
- Model transparency: Designing transparent and interpretable models helps identify and understand the sources of bias, enabling effective mitigation strategies.
- Input control and expert validation: Involving domain experts in the algorithm development and deployment process helps review, modify, and validate inputs and outputs.
- Relying on medical and clinical protocols: Using established medical and clinical protocols as a baseline helps reduce the risk of bias in healthcare AI systems.
- Diverse and representative data: Using diverse and representative data during training helps reduce data bias. Access to large, diverse datasets is crucial for building robust and equitable AI systems.
- Data augmentation: Generating additional examples from existing data through data augmentation techniques increases the diversity of training data.
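To make the fairness checks above concrete, here is a minimal sketch of computing two of the parity measures named in this list: the per-group selection rate (demographic parity) and per-group recall. The records, group labels, and alerting threshold are purely hypothetical illustrations, not any particular production pipeline or regulatory requirement.

```python
# A minimal sketch of group fairness checks, assuming binary labels and
# predictions and a single sensitive attribute. All data is hypothetical.
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate (demographic parity) and recall."""
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "actual_pos": 0, "tp": 0})
    for yt, yp, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1
        s["pred_pos"] += yp
        s["actual_pos"] += yt
        s["tp"] += yt and yp
    return {
        g: {
            "selection_rate": s["pred_pos"] / s["n"],
            "recall": s["tp"] / max(s["actual_pos"], 1),
        }
        for g, s in stats.items()
    }

# Hypothetical evaluation data for two demographic groups.
rates = group_rates(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
gap = (max(r["selection_rate"] for r in rates.values())
       - min(r["selection_rate"] for r in rates.values()))
print(rates)
print("demographic parity gap:", round(gap, 2))  # alert if above a chosen bound
```

In practice, such checks would run on held-out evaluation data for every demographic attribute relevant to the deployment context, with alert thresholds set in consultation with domain experts.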
Even after AI solutions have been tested for bias and launched, the continuous-learning nature of machine learning means that ongoing monitoring is required. An unbiased algorithm placed into an environment that contains an element of bias can become biased over time. For example, if patients of one race at a hospital are more likely to receive an MRI for the same condition, a continuously retrained algorithm may learn over time to recommend the test more often for patients of that race.
To mitigate this, we can take several post-training measures such as:
- Data monitoring: Continuously monitor data inputs and outputs to identify any patterns or trends indicating bias (see the sketch after this list).
- Fairness testing: Conduct fairness testing to assess whether AI systems produce equitable outcomes across different groups.
- Regulatory requirements: Regulators should require companies to conduct these tests, monitor algorithms, store the results, and be prepared for audits to ensure transparency, accountability, and fairness.
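As one way such monitoring could be operationalized (a sketch under our own assumptions, not a prescribed method), the example below uses a chi-square test to compare how often a model-driven recommendation, like the MRI order described above, is issued to two patient groups in a baseline window versus a recent window. All counts and the significance level are hypothetical.

```python
# A minimal drift-monitoring sketch: test whether the rate of a
# model-driven action differs across groups in a given time window.
# Counts and the 0.05 significance level are hypothetical.
from scipy.stats import chi2_contingency

# Rows = patient groups, columns = [test recommended, test not recommended].
baseline = [[120, 880], [115, 885]]   # hypothetical launch-window counts
current = [[180, 820], [95, 905]]     # hypothetical recent-window counts

def rate_shift_alert(window, alpha=0.05):
    """Flag when recommendation rates differ significantly across groups."""
    chi2, p_value, dof, expected = chi2_contingency(window)
    return p_value < alpha, p_value

for name, window in [("baseline", baseline), ("current", current)]:
    alert, p = rate_shift_alert(window)
    print(f"{name}: alert={alert}, p={p:.4f}")
```

A significant shift is not proof of bias, but it is a trigger for the fairness testing and audit-ready record-keeping described above.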
Another major pitfall is the difficulty of accurately distinguishing fake from real data. This challenge is particularly acute in critical domains like cybersecurity, finance, and healthcare, where the authenticity and quality of data are crucial. AI models depend heavily on the accuracy and quality of their training data; if a model is trained on fake or manipulated data, its performance can be compromised, with significant consequences.
To address this challenge, organizations should explore a range of approaches:
- Multiple data sources: Use multiple data sources to cross-validate and verify the authenticity of the information.
- Detection-focused AI models: Develop AI models specifically designed to detect fake data, leveraging techniques such as anomaly detection or generative adversarial networks (see the sketch after this list).
- Increased transparency: Enhance the transparency of AI models to allow users to understand how decisions are made and enable them to identify discrepancies or errors.
- Human expertise: Leverage human expertise and judgment to enhance the accuracy of data analysis, especially when dealing with complex and nuanced situations.
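To illustrate the detection-focused approach named above, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest on synthetic data. The features, contamination rate, and injected records are all hypothetical; a real pipeline would route flagged records to human reviewers, consistent with the human-expertise point above.

```python
# A minimal sketch of screening a dataset for suspect records before
# training, using isolation-forest anomaly detection. Data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
genuine = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))   # plausible records
injected = rng.normal(loc=6.0, scale=0.5, size=(20, 4))    # manipulated records
records = np.vstack([genuine, injected])

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(records)   # -1 = flagged as anomalous

flagged = np.where(labels == -1)[0]
print(f"flagged {len(flagged)} of {len(records)} records for review")
```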
In conclusion, by implementing proactive measures and adopting ethics guidelines for trustworthy AI, we can ensure the responsible development and deployment of AI technologies that benefit society. We are confident that as more AI solutions are created, the technology will become more sophisticated and easier to understand. The same holds for data: the more high-quality data we collect, the more capable these AI solutions will become.
Sincerely,
Diagnostic Robotics