In this post, I am sharing a summary of the paper AI Engineering: 11 Foundational Practices from the Software Engineering Institute at Carnegie Mellon University. These are recommendations for decision makers from experts in software engineering, cybersecurity, and applied artificial intelligence. This set of recommendations can help organizations that are beginning to build, acquire, and integrate artificial intelligence capabilities into business and mission systems.
11 Foundational Practices
- Ensure that you have a problem that both can and should be solved by artificial intelligence. This includes knowing the outcomes you want to achieve and knowing the data you will need to achieve them.
- Include highly integrated subject matter experts, data scientists, and data architects in your software engineering teams.
- Take your data seriously to prevent it from consuming your project. Cleanse the data and protect it from malicious injection.
- Choose algorithms based on what you need your model to do, not on popularity. As your system evolves, the algorithms you use are likely to change as well.
- Secure AI systems by applying highly integrated monitoring and mitigation strategies.
- Define checkpoints to account for the potential needs of recovery, traceability, and decision justification.
- Incorporate user experience (UX) and interaction to constantly validate and evolve models and architecture. Use an automated approach to capture human feedback on system output and improve models. Monitor UX to detect issues, such as degraded performance, early.
- Design the system to convey the inherent ambiguity in its output, so that users can interpret and assure the results.
- Implement loosely coupled solutions that can be extended or replaced to adapt to ruthless and inevitable data and model changes and algorithm innovations. When designing and sustaining AI systems, continuously apply fundamental design principles of engineering to develop loosely coupled, extensible, scalable, and secure systems.
- Commit sufficient time and expertise for constant and enduring change over the life of the system. Building AI systems requires substantial resources up front that must scale quickly, as well as a significant ongoing commitment of resources throughout the life of the system. These resources include computing, hardware, storage, bandwidth, expertise, and time.
- Treat ethics as both a software design consideration and a policy concern. Evaluate every aspect of the system for potential ethical issues. How the system will be used (e.g., autonomous military drones), data representation (e.g., ethnic, gender, and disability diversity in facial recognition), and model structure (e.g., including protected characteristics in credit or employment decisions) can all raise ethical issues.
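The data-hygiene practice above ("take your data seriously") can be sketched as a simple validation gate that rejects malformed or suspicious records before they enter a training pipeline. The field names, bounds, and checks below are illustrative assumptions, not part of the paper:

```python
# Minimal sketch of a data validation gate: reject records that fail
# schema or range checks instead of silently "fixing" them.
# Field names and bounds are illustrative assumptions.

def validate_record(record: dict) -> bool:
    """Return True only if the record passes all hygiene checks."""
    required = {"user_id", "age", "label"}
    if not required.issubset(record):
        return False
    if not isinstance(record["user_id"], str) or not record["user_id"].isalnum():
        return False  # guards against injected control or query characters
    if not (0 <= record.get("age", -1) <= 120):
        return False  # out-of-range values are quarantined, not patched
    if record["label"] not in {0, 1}:
        return False
    return True

def cleanse(records: list) -> tuple:
    """Split incoming data into accepted and quarantined records."""
    accepted = [r for r in records if validate_record(r)]
    quarantined = [r for r in records if not validate_record(r)]
    return accepted, quarantined
```

Quarantining rejected records, rather than dropping them, also supports the monitoring practice: a sudden spike in rejections is an early signal that the data source has changed or is under attack.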
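The checkpoint practice can likewise be sketched as a record that bundles model state with the metadata needed for recovery, traceability, and decision justification. The specific fields here (data version, parameter hash, notes) are my own illustrative choices:

```python
# Sketch of a checkpoint record carrying enough metadata to support
# recovery, traceability, and decision justification.
# Field choices are illustrative, not prescribed by the paper.
import hashlib
import json
import time

def make_checkpoint(model_params: dict, data_version: str, notes: str) -> dict:
    payload = json.dumps(model_params, sort_keys=True).encode()
    return {
        "timestamp": time.time(),
        "data_version": data_version,  # which data snapshot produced this model
        "params_sha256": hashlib.sha256(payload).hexdigest(),  # integrity check
        "notes": notes,  # human-readable justification for this model version
        "params": model_params,
    }

def verify_checkpoint(ckpt: dict) -> bool:
    """Confirm the stored parameters still match their recorded hash."""
    payload = json.dumps(ckpt["params"], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == ckpt["params_sha256"]
```

The hash makes tampering or corruption detectable on restore, and the data-version field lets you trace any decision back to the data that trained the model that made it.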
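Finally, the loose-coupling practice can be illustrated with a narrow model interface: callers depend only on the interface, so a retrained model or an entirely different algorithm can be swapped in without touching the rest of the system. All names here are hypothetical:

```python
# Sketch of a loosely coupled model boundary. The pipeline depends only
# on the Model protocol, so implementations can be extended or replaced
# as data, models, and algorithms change. Names are illustrative.
from typing import Protocol

class Model(Protocol):
    def predict(self, features: list) -> float: ...

class ThresholdModel:
    """A trivial stand-in; any retrained or new algorithm can replace it
    as long as it satisfies the same predict() interface."""

    def __init__(self, threshold: float):
        self.threshold = threshold

    def predict(self, features: list) -> float:
        return 1.0 if sum(features) > self.threshold else 0.0

def run_pipeline(model: Model, batch: list) -> list:
    # Caller code is unchanged when the model implementation is swapped.
    return [model.predict(features) for features in batch]
```

This is the same fundamental design principle the paper points back to: keep components extensible and replaceable so that inevitable model and algorithm changes stay local.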