What I'm Thinking About Right Now
Problem/status quo (anecdotal?): AI is not reliable enough to be useful inside a software system.
(A software system, in my mind, is characterized by two things: its domain and its task.)
I want to integrate AI into software systems in a way that is actually useful to humans.
Two research directions I’m interested in:
- What should AI be used for in a system?
- How should AI be integrated?
Question 1: What should AI be used for in a system?
My intuition says:
- Providing a simplified representation of the real world that downstream code (or humans) can act on (see the sketch after this list).
- Automating the boring/happy paths in a system.
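As a toy illustration of the first point, here is a minimal sketch: a model turns messy, unstructured input into a small structured state, and ordinary deterministic code decides what to do from that state. Everything here (TicketState, call_model, the routing rules) is a hypothetical placeholder, and the model call is stubbed so the example runs end to end.

```python
from dataclasses import dataclass
import json


@dataclass
class TicketState:
    """Simplified representation of a customer support ticket."""
    category: str       # e.g. "billing", "bug", "feature_request"
    urgency: int        # 1 (low) .. 5 (critical)
    needs_human: bool   # model's own estimate of whether a human must look


def call_model(prompt: str) -> str:
    """Hypothetical model call; in a real system this would hit an LLM or a
    fine-tuned classifier. Stubbed here so the sketch runs end to end."""
    return json.dumps({"category": "billing", "urgency": 2, "needs_human": False})


def extract_state(raw_ticket_text: str) -> TicketState:
    """AI component: map messy real-world input to the simplified representation."""
    raw = call_model(f"Summarize this ticket as JSON: {raw_ticket_text}")
    return TicketState(**json.loads(raw))


def decide_action(state: TicketState) -> str:
    """Ordinary software: deterministic rules acting on the simplified representation."""
    if state.needs_human or state.urgency >= 4:
        return "escalate_to_human"
    if state.category == "billing":
        return "route_to_billing_queue"
    return "auto_reply_with_faq"


if __name__ == "__main__":
    state = extract_state("I was charged twice for my subscription last month.")
    print(state, "->", decide_action(state))
```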
I’m interested in situations where AI components are brought into a software system either (a) to replace existing software components and the teams of humans around them, or (b) as an additional component alongside them.
- Are these systems doing well?
- If not, what’s going wrong?
Question 2: How should AI be integrated?
AI components in these systems should have four properties:
1. They have “meaningful” interfaces to accept human intent.
2. They provide “meaningful” outputs.
3. They are reliable/robust/fast.
4. They have a traceback so you can inspect how an output came to be.
Here,
- Properties 1 and 2 don’t have to be properties of the ML models themselves. They can be input/output software layers (which may themselves contain other models) wrapped around a model (sketched below, after this list).
- Properties 3 and 4 might require improving ML architectures/training/inference, plus adding further input/output software layers.
- What is “meaningful”? That’s probably domain-specific, and it might be a research direction in itself.
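Here is a rough sketch of what properties 1, 2, and 4 could look like as software layers wrapped around a model: a typed interface for human intent, an output layer that validates the raw model output before anything downstream sees it, and a trace recording each stage so an output can be inspected later. All names (Intent, TraceStep, run_component, call_model) are hypothetical, and the model call is stubbed so the sketch runs.

```python
from dataclasses import dataclass


@dataclass
class Intent:
    """Input layer: a structured, 'meaningful' way to express human intent."""
    goal: str                # e.g. "summarize", "classify"
    constraints: list[str]   # e.g. ["max 3 sentences", "no speculation"]


@dataclass
class TraceStep:
    stage: str   # "input", "model", or "output"
    detail: str


def call_model(prompt: str) -> str:
    """Hypothetical model call; stubbed so the sketch runs end to end."""
    return "  The system processed 42 records without errors.  "


def run_component(intent: Intent, payload: str) -> tuple[str, list[TraceStep]]:
    trace: list[TraceStep] = []

    # Input layer: turn structured intent into whatever the model actually consumes.
    prompt = f"Goal: {intent.goal}. Constraints: {'; '.join(intent.constraints)}.\n{payload}"
    trace.append(TraceStep("input", prompt))

    # Model layer: the unreliable part being wrapped.
    raw = call_model(prompt)
    trace.append(TraceStep("model", raw))

    # Output layer: validate/normalize before anything downstream sees it.
    cleaned = raw.strip()
    if not cleaned:
        trace.append(TraceStep("output", "REJECTED: empty model output"))
        raise ValueError("model returned empty output")
    trace.append(TraceStep("output", cleaned))

    return cleaned, trace


if __name__ == "__main__":
    out, trace = run_component(
        Intent(goal="summarize", constraints=["max 3 sentences"]),
        "log line 1\nlog line 2",
    )
    print(out)
    # Property 4: inspect how the output came to be.
    for step in trace:
        print(f"[{step.stage}] {step.detail[:60]!r}")
```

The point of the sketch is that the reliability of the overall component comes partly from ordinary software (typed inputs, validation, tracing) rather than only from the model itself.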
Can we come up with design patterns based on the task and domain (similar to software engineering design patterns)?