Aadyaa Maddi

What I'm Thinking About Right Now

Problem/status quo (anecdotal?): AI is not reliable enough to be useful inside a software system.

(A software system, in my mind, is characterized by two things: its domain and its task.)

I want to integrate AI into software systems in a way that is actually useful to humans.

Two research directions I’m interested in:

  1. What should AI be used for in a system?
  2. How should AI be integrated?

Question 1: What should AI be used for in a system?

My intuition says:

  1. Providing a simplified representation of the real world that you can act on.
  2. Automating the boring/happy paths in a system.
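The second intuition can be sketched in code: an AI component handles the routine, high-confidence cases and escalates everything else to a human. This is a minimal, hypothetical sketch; the function names and the confidence threshold are illustrative, not a real API.

```python
# Hypothetical sketch: an AI component automates the "happy path"
# (routine, high-confidence cases) and escalates everything else
# to a human. All names here are illustrative.

def classify_ticket(text: str) -> tuple[str, float]:
    """Stand-in for an ML model: returns (label, confidence)."""
    routine = {"reset password": "account", "refund status": "billing"}
    for phrase, label in routine.items():
        if phrase in text.lower():
            return label, 0.95
    return "unknown", 0.30

def route_ticket(text: str, threshold: float = 0.9) -> str:
    label, confidence = classify_ticket(text)
    if confidence >= threshold:
        return f"auto:{label}"   # happy path: handled by the AI
    return "human:triage"        # everything else goes to a person

print(route_ticket("Please reset password for my account"))  # auto:account
print(route_ticket("My order arrived broken and on fire"))   # human:triage
```

The design choice worth noticing: the boring path is fully automated, but the fallback to a human is part of the system's contract, not an afterthought.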

I’m interested in situations where AI components are brought into a software system either (a) to replace existing software components and teams of humans, or (b) as an additional component.

  1. Are these systems doing well?
  2. If not, what’s going wrong?

Question 2: How should AI be integrated?

AI components in these systems should have four properties:

  1. They have “meaningful” interfaces to accept human intent.
  2. They provide “meaningful” outputs.
  3. They are reliable/robust/fast.
  4. They have a traceback so you can inspect how an output came to be.

Here,

  1. Properties 1 and 2 don’t have to be properties of the ML models themselves. They can be input/output software layers (which may themselves contain other models) wrapped around a model.
  2. Properties 3 and 4 might require improving ML architectures/training/inference, plus additional input/output software layers.
  3. What is “meaningful”? That’s probably domain-specific, and it might be a research direction in itself.
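The four properties above can be sketched as software layers wrapped around a model: a validated interface for human intent, a checked output, a retry loop for robustness, and a trace recording how the output came to be. This is a hypothetical sketch under the assumption that the "model" is a stub function; every name here is illustrative.

```python
# Hypothetical sketch of properties 1-4 as software layers wrapped
# around a model. The model() call is a stub; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Trace:
    steps: list[str] = field(default_factory=list)
    def log(self, msg: str) -> None:
        self.steps.append(msg)

def model(prompt: str) -> str:
    """Stub standing in for an ML model call."""
    return prompt.upper()

def run(intent: str, max_retries: int = 2) -> tuple[str, Trace]:
    trace = Trace()
    # Property 1: a meaningful interface that checks human intent.
    if not intent.strip():
        raise ValueError("empty intent")
    trace.log(f"accepted intent: {intent!r}")
    # Property 3: a retry loop for reliability/robustness.
    for attempt in range(max_retries + 1):
        output = model(intent)
        trace.log(f"attempt {attempt}: model returned {output!r}")
        # Property 2: validate that the output is meaningful.
        if output:
            # Property 4: the trace shows how the output came to be.
            return output, trace
    raise RuntimeError("model produced no output after retries")

output, trace = run("summarize this report")
print(output)
print(trace.steps)
```

Note that only property 3 touches the model call itself; the other three live entirely in the wrapping software, which is the point of the list above.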

Can we come up with design patterns based on the task and domain (similar to software engineering design patterns)?

#Software-Engineering #Machine-Learning