Artificial Intelligence, or AI, has been a trending topic in computer science for some time now. Artificial intelligence is a broad subject; however, many of its concepts branch off from a handful of fundamental ideas. In the spring of 2019, I created four AI demos that cover some of these topics.
All of these demos are written in Python, and can be viewed live on Microsoft Azure in Jupyter Notebooks. Each project section heading is a link to the corresponding Jupyter Notebook on Azure.
Further, while I summarize each demo here, I encourage you to view the live demos on Azure. Besides the novelty of a live demo, each demo includes a much more detailed report about the project and its results.
Lastly, some knowledge of graph theory may be helpful in understanding this write-up. On to the projects!
For the first project, I made three unique agent programs for a robotic vacuum cleaner (you know, like a Roomba). That is, I wrote programs that control how the robot reacts to its environment.
The first agent was a simple reflex agent, meaning it can only use the current sensor data to make decisions. With such limited information, the robot's decision making was limited too, and it would get stuck cleaning the perimeter of the room.
The second agent I designed was a random reflex agent. This was like the first agent, but with the ability to make random moves. By allowing the agent to make random turns, the robot was able to cover more ground in the room than the simple reflex agent.
Lastly, there was a reflex agent with memory. This agent was able to use both the current input and a limited history of previous actions to make decisions. By carefully designing a handful of moves based on the available information, this program surpassed the random reflex agent’s efficiency.
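To make the differences concrete, the three agent styles can be sketched roughly like this. The percept format and action names here are hypothetical, not the original notebook code:

```python
import random

def simple_reflex_agent(percept):
    """Decide using only the current percept: (is_dirty, bumped)."""
    is_dirty, bumped = percept
    if is_dirty:
        return "suck"
    if bumped:
        return "turn_right"   # always the same reaction -> hugs the walls
    return "forward"

def random_reflex_agent(percept):
    """Like the simple agent, but occasionally makes a random turn."""
    is_dirty, bumped = percept
    if is_dirty:
        return "suck"
    if bumped or random.random() < 0.1:
        return random.choice(["turn_left", "turn_right"])
    return "forward"

def make_memory_agent(history_size=4):
    """Reflex agent with memory: remembers its last few actions."""
    history = []
    def agent(percept):
        is_dirty, bumped = percept
        if is_dirty:
            action = "suck"
        elif bumped and history[-1:] == ["turn_right"]:
            action = "turn_left"   # just turned right and bumped again: try the other way
        elif bumped:
            action = "turn_right"
        else:
            action = "forward"
        history.append(action)
        del history[:-history_size]   # keep only a limited history
        return action
    return agent
```

The closure in `make_memory_agent` is one simple way to give a reflex agent state without changing its percept interface.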
In general, with AI, like with most programming, the more resources you have, the more options you have. But, like all programming, boundless resources aren’t available, and clever use of what is available is key.
Since every project after the first uses search algorithms, I feel I should give a quick overview of them. All search algorithms take some sort of start data, and use it to reach some sort of end state. How each goes about doing this varies, but there are some common steps.
Every search algorithm begins with an initial set of data. A programmer-defined function manipulates that data, and the search uses the manipulated data to decide its next step.
The search algorithm may continue the search with this new data as a new point of reference. Some search functions track all the steps taken as an elaborate graph, with each path representing a possible solution. Other search algorithms only track the current data state, as some problems only need the solution without the steps needed to reach it.
The search algorithm may also end the search. Some search algorithms compare the current state to an explicitly defined end state, and only stop when it is reached. Others return a solution when no better one is available.
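The common loop described above can be sketched like this. Here `pop` is a stand-in for the policy that picks which frontier entry to expand next; that single choice is what distinguishes the algorithms in the later projects. This is illustrative code, not from the notebooks:

```python
def graph_search(start, successors, is_goal, pop):
    """Generic search skeleton: a frontier of (state, path-so-far) pairs.
    `pop` decides which entry to expand next (queue vs. stack vs. ...)."""
    frontier = [(start, [start])]
    seen = {start}
    while frontier:
        state, path = pop(frontier)
        if is_goal(state):
            return path            # the path is the solution
        for nxt in successors(state):
            if nxt not in seen:    # avoid revisiting states
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# pop = lambda f: f.pop(0)  -> breadth-first (FIFO queue)
# pop = lambda f: f.pop()   -> depth-first  (LIFO stack)
```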
With that said, back to the projects!
This was my first project using search algorithms, and I focused on one: breadth-first search.
Breadth-first search maintains a history of each step taken, and returns a path from the start state to the end state. Each node in this path represents a manipulation of the data needed to reach the end state, with the path itself representing a solution.
The name “breadth-first” refers to how it chooses which node to continue searching from: it expands every node at the current depth before moving deeper. It’s easiest to contrast with depth-first search, which follows a single path for a long time before backtracking and can produce long, roundabout solutions; breadth-first search always returns the shortest solution possible.
However, the efficiency of breadth-first search’s solutions can be costly for the search itself. Maintaining the history of all possible steps for many concurrently explored paths can drain system resources.
To familiarize myself with breadth-first search, I used it to solve two classic AI puzzles. Wikipedia documents these well, so instead of describing them here, I’ll redirect you to them: “Fox, Goose, and Beans” and “Missionaries and Cannibals”.
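As a taste of what this looks like in code, here is a minimal breadth-first solver for Missionaries and Cannibals (three of each, a boat that holds two). This is an illustrative sketch, not the original notebook code:

```python
from collections import deque

def mc_successors(state):
    """Legal crossings from (missionaries, cannibals, boat) on the left bank."""
    m, c, boat = state
    d = -1 if boat == 1 else 1        # boat on left: people leave; else they return
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        nm, nc = m + d * dm, c + d * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3:
            # neither bank may have missionaries outnumbered by cannibals
            if (nm == 0 or nm >= nc) and (3 - nm == 0 or 3 - nm >= 3 - nc):
                yield (nm, nc, 1 - boat)

def bfs(start, goal, successors):
    """Breadth-first search: FIFO frontier of full paths."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

solution = bfs((3, 3, 1), (0, 0, 0), mc_successors)
```

Because the frontier is a FIFO queue, the first solution found is also the shortest: the well-known eleven-crossing answer.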
With that said, there isn’t too much else to say about this project. Since the goal was to learn more about search algorithms, the problems I tackled have well known solutions. So, on to the next project…
For this project, I compared two search algorithms: iterative deepening and A*. To test and compare these two search algorithms, I used them to solve a simple sliding tile puzzle.
First up was iterative deepening, a variant of depth-limited search. Depth-limited search is a depth-first search that abandons any path once it exceeds a set depth, and gives up if no solution is found within that limit. Iterative deepening re-runs a depth-limited search at greater and greater depths until a solution is found.
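The two-layer structure can be sketched like this (illustrative code, not the notebook’s sliding-tile implementation):

```python
def depth_limited(state, is_goal, successors, limit, path=None):
    """Depth-first search that abandons any path deeper than `limit`."""
    path = path or [state]
    if is_goal(state):
        return path
    if limit == 0:
        return None                # hit the depth limit: give up on this path
    for nxt in successors(state):
        if nxt not in path:        # avoid trivially cycling back
            found = depth_limited(nxt, is_goal, successors,
                                  limit - 1, path + [nxt])
            if found:
                return found
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    """Re-run depth-limited search with limits 0, 1, 2, ... until success."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, is_goal, successors, limit)
        if found:
            return found
    return None
```

The repeated shallow passes look wasteful, but each deeper pass dominates the cost, and the search only ever stores the current path, which is where the memory savings come from.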
The second search algorithm tested was A*. A* guides its search using the current path length (or total weight, if you assign a weight to each step) plus a heuristic that estimates how close a state is to the end state.
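A minimal A* sketch, with hypothetical `successors` and `heuristic` callables standing in for the sliding-tile specifics:

```python
import heapq

def a_star(start, is_goal, successors, heuristic):
    """Best-first search ordered by g(n) + h(n): path cost so far plus a
    heuristic estimate of the remaining distance to the goal."""
    # Frontier entries: (f = g + h, g, state, path taken so far)
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt, step_cost in successors(state):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):   # found a cheaper route
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic(nxt), new_g, nxt, path + [nxt]))
    return None
```

With `heuristic` returning 0 everywhere this degrades to Dijkstra’s algorithm; a good admissible heuristic (like Manhattan distance for sliding tiles) is what makes A* fast.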
Both search functions have their strengths and weaknesses. A* produces good results in a short amount of time. On the other hand, iterative deepening may not be as fast, but only requires a fraction of the system resources.
As for how these algorithms performed, A* outperformed iterative deepening. A* was able to find a solution in seconds, while iterative deepening never produced a solution. Since there are more efficient variants of A*, I would test them before falling back on iterative deepening when I need an efficient search algorithm.
The last of my projects focused on greedy search algorithms. Greedy searches don’t guarantee they will find the optimal solution. However, they require a fraction of the system resources, and work well with problems that don’t require a perfect solution.
Before getting into greedy search algorithms, I need to introduce objective functions. An objective function takes a problem state and assigns it a value. A greedy search uses this value to choose which state it uses to continue its search.
The first algorithm tested was hill climbing. This is the archetypal greedy algorithm: at each step, the search moves to the neighboring state with the highest objective value. When no state offers a higher value, the search stops, whether or not the best possible solution has been found.
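A bare-bones hill climbing sketch, with hypothetical `objective` and `neighbors` functions rather than the project’s code:

```python
def hill_climb(state, objective, neighbors):
    """Repeatedly move to the best-scoring neighbor; stop at a local maximum."""
    while True:
        best = max(neighbors(state), key=objective, default=state)
        if objective(best) <= objective(state):
            return state   # no neighbor improves: a local (not always global) max
        state = best
```

For example, maximizing `-(x - 3)**2` over integer neighbors `x - 1` and `x + 1` walks straight to `x = 3`, but on a function with several humps it would stop on whichever hump it started near.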
Simulated annealing expands on hill climbing by adding the choice to randomly follow a non-optimal state. A programmer defines a cooling function, which controls how often random choices are made. By including random but controlled choices, simulated annealing is less likely to get stuck on a non-optimal solution.
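A minimal simulated annealing sketch with a simple geometric cooling schedule; the parameter values are illustrative, not the project’s:

```python
import math
import random

def simulated_annealing(state, objective, random_neighbor,
                        temp=10.0, cooling=0.95, steps=1000):
    """Hill climbing with controlled randomness: sometimes accept a worse
    neighbor, with a probability that shrinks as the temperature cools."""
    best = state
    for _ in range(steps):
        nxt = random_neighbor(state)
        delta = objective(nxt) - objective(state)
        # Always accept improvements; accept setbacks with prob e^(delta/temp)
        if delta > 0 or random.random() < math.exp(delta / temp):
            state = nxt
        if objective(state) > objective(best):
            best = state           # remember the best state seen so far
        temp *= cooling            # the cooling schedule
        if temp < 1e-9:
            break
    return best
```

Early on, with a high temperature, almost any move is accepted, which lets the search escape local maxima; as the temperature drops it behaves more and more like plain hill climbing.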
One thing I realized after finishing this project was I could have improved the objective function. I used some awkward math to make my objective function work with my code, but it could’ve worked better if I just added a minus sign somewhere. The report I created for the project details it better, so I encourage you to check it out.
So, these were my little AI projects. There is a lot more to AI (the first topic to come to mind is machine learning), but this was me laying the foundation. I can build off the experience these projects gave me and use it to address more complicated AI problems.
With that said, I am a well-rounded person, and if you haven’t already, please check out the rest of my portfolio. Or, if you want to commission me for a programming project, here’s a contact form you can use to reach me.