Mid-Project Deliverables:
Design Review - Reflection
Feedback and Decisions
We received some interesting feedback on methods for evaluating chess boards, including examining patterns in the blank space on the board and using a database of real games to help the AI through the opening. We have decided to start by developing a simple chess AI using min-max, then add machine-learning embellishments to improve it (a minimal sketch of that starting point appears below). Once we have an initial min-max AI working, we will probably try both of the suggested strategies. The idea of drawing on real chess games will most likely be implemented with neural networks referencing a large database of actual games, and examining blank space will probably be interesting to experiment with when we evaluate potential moves within that database.
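To make the starting point concrete, here is a minimal sketch of a fixed-depth min-max search over chess positions. It is an illustration under stated assumptions, not our actual implementation: the python-chess library, the material-only evaluate() function, and the depth of 3 are all stand-ins; the blank-space and database ideas from the review would eventually augment or replace the evaluation.

```python
# Hypothetical min-max sketch; assumes the python-chess library (pip install chess).
import chess

# Placeholder material values; our real evaluation may differ.
PIECE_VALUES = {
    chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0,
}

def evaluate(board: chess.Board) -> int:
    """Score the position from White's point of view by simple material count."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int, maximizing: bool) -> int:
    """Plain min-max search to a fixed depth (no pruning yet)."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:
        board.push(move)
        scores.append(minimax(board, depth - 1, not maximizing))
        board.pop()
    return max(scores) if maximizing else min(scores)

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    """Pick the legal move with the best min-max score for the side to move."""
    maximizing = board.turn == chess.WHITE
    best, best_score = None, None
    for move in board.legal_moves:
        board.push(move)
        score = minimax(board, depth - 1, not maximizing)
        board.pop()
        if best is None or (score > best_score if maximizing else score < best_score):
            best, best_score = move, score
    return best

if __name__ == "__main__":
    print(best_move(chess.Board(), depth=3))
```

Once something like this works end to end, swapping evaluate() for a learned evaluation (or consulting an opening database before searching) should not require changing the search itself.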
Review Process Reflection
Our review was not as helpful as it could have been because of our approach to it. We gave too much background and asked questions that required too much prior knowledge for the audience to give very helpful advice. We got some interesting answers to our questions, and they may help us take a different approach to chess AIs, but if we had asked more approachable questions, we probably could have received more productive feedback. We mostly stuck to our agenda; we simply talked more than we had planned because we did not get as much audience participation as we had hoped for. For our next review, we will focus on a few key questions, probably not about the theory of our project but about our code structure or any logic issues that arise before the review.