Wednesday, 3 May 2017

Project Evaluation

As my project is finishing this week I've decided to go back over all the work I have done on the project to evaluate myself, looking at what I have done and what I would have done differently knowing how the project turned out. I think this is an important thing to do to help me improve in the future.


What did I want to achieve?

When starting this project I wanted to create an artificial character inside the Unity3D game engine using the JavaScript programming language. The AI was meant to be able to see the player if they moved into its line of sight, as well as hear footsteps made by the player. Once the player was detected the AI would begin to path towards them. The code should be kept neat, with key elements commented, and be efficient where possible so that anyone viewing the code would be able to tell what is going on.



What changed with my project?

Over the course of my project not too much changed from what I originally planned to do. The biggest thing that I changed was that instead of using JavaScript to program the AI I switched to C#. I did this after having a meeting with Chris where we spoke about how my project would look from an employer's perspective. He mentioned that a lot of employers would rather see my ability to program in C# and that it would greatly benefit my project to change languages. The only other change was that I added additional work to my project as I was ahead of my milestones. I decided to create attacking mechanics for the AI and to create my own pathfinding algorithm. I created my own pathfinding algorithm after another meeting with Chris where we discussed how showing that I have an understanding of algorithms like A* would have a positive effect on my portfolio. The attacking mechanics changed quite a bit throughout the project, as porting my project to C# and creating my algorithm took up the time that I had initially planned to spend on them.



What went well?

The main thing that I think went well on my project was that I worked consistently on it every week (except for the two weeks over the Christmas break). Putting this much time into the project meant that not only did I achieve everything that I originally planned to do, but I was also able to create new features that I had not planned. I think that if I had done less work on my project it would have had a big impact on the quality of my work as well. Another thing that I liked about my project was that it gave me a good opportunity to show my programming ability as well as learn new techniques which I will be able to use on different projects that I work on after University.



What didn't go well?

I think the biggest problem with my project was that I had under-scoped what I was going to do. This was because when I originally created my project proposal I had never created an AI character like this for a game. As I was also using a lot of new skills and techniques, I didn't think I would learn them as quickly as I did, and I surprised myself with what I could do. If I were to start my project again I would definitely push myself more at the start to learn even more advanced tools (for example, a more advanced state machine), as I think that if I put the time into learning these harder tools I would be able to.


Summary
Overall I think my project went really well and I am happy with the final product that I have created. I will be looking to take this AI character and use it in projects that I plan on doing once I finish University. I think the final product shows my ability to program, as well as to work in an efficient and tidy manner, which is something that I had originally set out to do.


Final bug fixing

For this week's work I went through the feedback that I received from my play testing last week and tried to improve and fix as many of the problems that were mentioned by my play testers as I could.

The fixes that I made were:
  • The AI can now wait before transitioning between waypoints (a rough sketch of how this works is below the list).
  • Player no longer collides with the AI's ragdoll body.
  • Recreated the test scenes and added a menu. This allows for easier testing of the AI.
  • Added controls to the menu.
  • Player is now able to crouch when pressing left control. This makes no noise and allows them to sneak up very close to the AI.
  • Hearing is now less sensitive so you need to be closer to alert the AI.
  • AI raycast for sight has been adjusted. Previously it aimed down slightly, which meant the AI was unable to see the player when they should have been in sight.
  • AI loses alert status quicker.
  • Fixed another bug with the AI sight. The AI's position variable wasn't being set correctly, causing the raycast to sometimes shoot from several metres below the AI.
  • Changed the AI view cone sizes (previously they were a bit too small).
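As a rough sketch, the waypoint wait works along these lines; the field names here (waypoints, waitTime) are placeholders for my actual patrol setup, but the idea is a coroutine that pauses between destinations:

    using System.Collections;
    using UnityEngine;
    using UnityEngine.AI;

    // Sketch of the waypoint wait: a coroutine that pauses between destinations.
    // Field names are placeholders for my actual patrol setup.
    public class PatrolWithWait : MonoBehaviour
    {
        public Transform[] waypoints;   // patrol points placed in the scene
        public float waitTime = 2f;     // pause before moving to the next waypoint

        private NavMeshAgent agent;
        private int currentIndex;

        private void Start()
        {
            agent = GetComponent<NavMeshAgent>();
            StartCoroutine(Patrol());
        }

        private IEnumerator Patrol()
        {
            while (true)
            {
                agent.SetDestination(waypoints[currentIndex].position);

                // Wait until the agent has reached the current waypoint.
                while (agent.pathPending || agent.remainingDistance > agent.stoppingDistance)
                    yield return null;

                // Pause before picking the next waypoint.
                yield return new WaitForSeconds(waitTime);
                currentIndex = (currentIndex + 1) % waypoints.Length;
            }
        }
    }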

All of these adjustments have been made to further polish the AI's mechanics and to create a better experience when playing in a scene with it. Something that I have tried to keep in mind this week was a comment from one play tester about how the AI would feel if it were used in an actual game. This ties back to something I mentioned when I started this project: that I would like to be able to use the AI in a game if I wanted to.

After making all of these adjustments I have gone back over my code, making sure that it is as efficient as I can make it, that it is commented and that it is easy to read. This was one of the main reasons that I decided to undertake AI as my project, as I wanted to show a potential employer that I can work in a neat and efficient manner.

Something that I tried to fix but was unable to get working was having the AI rotate faster. The AI can currently be led round in a circle because it is unable to turn as quickly as the player. I tried increasing the speed on the NavMeshAgent component, however this did not fix it. I have looked through the code as well but was unable to find anything that is affecting this.
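One note for next time: on the NavMeshAgent it is angularSpeed, rather than speed, that controls how quickly the agent turns, so that may be the first thing to try. Failing that, a rough sketch of taking rotation control away from the agent and turning at my own rate would look something like this (turnSpeed is a placeholder value, not something from my current code):

    using UnityEngine;
    using UnityEngine.AI;

    // Sketch of one approach to try: rotate the AI manually instead of letting
    // the NavMeshAgent do it, so the turn rate can be tuned freely.
    public class ManualAgentRotation : MonoBehaviour
    {
        public float turnSpeed = 360f; // degrees per second (placeholder value)

        private NavMeshAgent agent;

        private void Start()
        {
            agent = GetComponent<NavMeshAgent>();
            agent.updateRotation = false; // stop the agent rotating the transform itself
        }

        private void Update()
        {
            Vector3 direction = agent.desiredVelocity;
            direction.y = 0f;

            if (direction.sqrMagnitude > 0.01f)
            {
                // Turn towards where the agent wants to move, at our own rate.
                Quaternion target = Quaternion.LookRotation(direction);
                transform.rotation = Quaternion.RotateTowards(transform.rotation, target, turnSpeed * Time.deltaTime);
            }
        }
    }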

My next blog post will most likely be my final blog post on this project. I will be evaluating my project, looking at what I have created, what I think went well, what I think could have been improved, and discussing the changes that I made throughout the course of the project. I think that it is important for me to evaluate myself like this to help find areas that I can improve on when working on other projects.


I have uploaded the last changes to my GitHub page which can be found here: https://github.com/ABurton96/GameAI

Sunday, 30 April 2017

Playtesting

For this week's work I looked at getting external feedback on my project to help me find any problems and improvements that I can make before the end of my project. I got these play testers to try out the test scene that I created last week.

From my play testers the feedback I received was:

  • The AI reacted too quickly when it saw me. A short delay would be better.
  • You collide with the ragdoll body when it's on the floor.
  • Smaller test scenes would be nicer to show off how the AI works, e.g. one AI facing away to show how the hearing works.
  • AI turns too slowly.
  • A* algorithm works well. When running at an angle you can see the AI stutter when moving between the centre of nodes. 
  • The AI is very responsive and can be hard to sneak by at times. 
  • How would the AI work in a complete game? AI is very difficult to lose if it finds you.
  • No ability to crouch as the player. As you always make noise you can't get behind the AI.
  • No way to add a delay when transitioning between waypoints.
  • Sight doesn't take into account scene lighting.

After reading all this feedback I will spend my final week working on fixing and improving some of the things mentioned above. I will be changing the AI's turning speed, creating different test scenes, removing collision with the AI ragdoll, adding the ability to delay when moving between waypoints, and looking into making the AI react a little slower than it currently does.

Getting this play testing done has been important for improving my AI and making it as good as I can. It is something that I wish I had been able to do earlier, as it would have given me more time to improve more mechanics and would have allowed me to gain even more feedback.

Sunday, 23 April 2017

Polish and preparing for play testing

For this week's work I looked at polishing some of the AI's mechanics and building a scene so that I can gain feedback on the AI from play testers.

I started by changing how the ragdoll system works when the AI is killed. Previously the AI had all the ragdoll elements on its different body parts while also having all the other AI scripts and animations on the same game objects. This was causing a few problems where, when the AI died, it would be thrown backwards and up into the sky. I changed this so that the ragdoll elements are on a different game object and are instantiated when the AI runs out of health. This system looks and works much better than the previous one while not causing any performance problems.
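Roughly, the new death handling boils down to something like this (ragdollPrefab is a placeholder name for the separate ragdoll object set up in the inspector):

    using UnityEngine;

    // Sketch of the new death handling: swap the live AI for a separate ragdoll
    // prefab instead of enabling ragdoll parts on the same object.
    public class AIDeath : MonoBehaviour
    {
        public GameObject ragdollPrefab; // placeholder name for the ragdoll prefab

        public void Die()
        {
            // Spawn the ragdoll where the AI was standing, then remove the live AI.
            Instantiate(ragdollPrefab, transform.position, transform.rotation);
            Destroy(gameObject);
        }
    }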

Next I added a small health bar above the AI's head so you are able to tell how much health it has. After adding this I noticed that some attacks were unintentionally doing extra damage. After a little bit of bug testing I realised that this was being caused by the colliders on the AI's different game objects that were previously being used by the ragdoll. Removing these fixed the problem.
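A minimal sketch of the health bar idea, assuming a fill transform that is scaled with remaining health and billboarded towards the camera (the field names are placeholders for my actual setup):

    using UnityEngine;

    // Sketch of the health bar: shrink a fill graphic as health drops and keep
    // the bar facing the camera. Field names are placeholders.
    public class AIHealthBar : MonoBehaviour
    {
        public Transform fill;          // the bar's fill graphic
        public float maxHealth = 100f;
        public float currentHealth = 100f;

        private void LateUpdate()
        {
            // Shrink the fill on its local X axis as health drops.
            float t = Mathf.Clamp01(currentHealth / maxHealth);
            fill.localScale = new Vector3(t, 1f, 1f);

            // Billboard the bar so it always faces the camera.
            transform.forward = Camera.main.transform.forward;
        }
    }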

After this I added the ability to "back stab" the AI. This means that stabbing the AI in the back will instantly kill it, whereas hitting it in the front will only damage it. I also added a very basic death screen for the player. I have done this to help when I give the project out to play testers, so that they can restart the scene if they die.
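The back stab check can be done with a simple dot product between the AI's forward vector and the direction to the attacker; a sketch of the idea (not my exact code) looks like this:

    using UnityEngine;

    // Sketch of the back stab check: if the attacker is behind the AI, treat the
    // hit as an instant kill; otherwise apply normal damage.
    public static class BackstabCheck
    {
        public static bool IsBackstab(Transform ai, Transform attacker)
        {
            Vector3 toAttacker = (attacker.position - ai.position).normalized;

            // Dot < 0 means the attacker is more than 90 degrees behind the AI's facing.
            return Vector3.Dot(ai.forward, toAttacker) < 0f;
        }
    }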

Finally I built a scene that I am going to be sending out to play testers next week. I have tried to create a scene that shows off all the different things the AI can do while also giving the player a bit of choice as to how they want to go about killing/moving past the AI characters.

From now on I will no longer be working on my A* algorithm and will just be using Unity's navMesh. This is because my algorithm has a lot less functionality than Unity's, and it would take a lot of time to get it to that standard. I am happy that I put the time into creating the algorithm as I feel that I have learned a lot from it.

Next week I will be collecting all the information from the play testers and seeing what problems and improvements are suggested. As I only have a couple of weeks left on the project I will be looking at what small problems I can fix and improving the AI where I can before handing my project in.

Sunday, 16 April 2017

Re-adding attacking mechanics and small bug fixes

For this week's work I looked at redoing the attacking system for the AI and also had a little look at changing the way nodes are handled to make my pathfinding algorithm more efficient.

I started by looking at changing the nodes so that the script only generates one set of nodes per scene, rather than a set of nodes per AI, which is what currently happens. After spending around an hour working on this I realised that it was going to take quite a bit of work to change how things are handled and that it was not worth my time to continue. This was because I feel it is more important to finish other key features as I only have a few weeks left of the project.
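For reference, the rough shape of the change I had in mind was a single grid object in the scene that every AI queries, instead of each AI generating its own copy; something along these lines (the names and the generation step are simplified placeholders, not my actual grid code):

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch of a shared node grid: one grid per scene that every AI reads from,
    // rather than each AI generating its own nodes.
    public class SharedNodeGrid : MonoBehaviour
    {
        public static SharedNodeGrid Instance { get; private set; }

        private readonly List<Vector3> nodes = new List<Vector3>();

        private void Awake()
        {
            // Only ever keep one grid per scene.
            if (Instance != null && Instance != this)
            {
                Destroy(gameObject);
                return;
            }
            Instance = this;
            GenerateNodes();
        }

        private void GenerateNodes()
        {
            // The real generation step would scan the level here; omitted for brevity.
        }

        public IReadOnlyList<Vector3> Nodes => nodes;
    }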

After working on the algorithm I moved on to the AI's attack. I started by creating a basic attacking animation using a sword model that I found online (mmarti, 2002) and created attacking animations for the AI and player. I added a basic combat and health system so that both could take damage. Something that I realised after testing was that nothing happened when either the AI or the player ran out of health. For the AI I have set it up so that it enables a ragdoll effect on death. Using a ragdoll character instead of creating a death animation allows me to add specific forces to each part of the body when it is attacked, giving the effect that the AI has actually taken damage in that specific part of the body. I have yet to add anything for the player dying, however I will likely just have the scene restart as my main focus is on the AI.
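As a sketch of the combat and ragdoll idea (the field and method names are placeholders, not my exact code): when a hit lands the AI takes damage, and once its health runs out the body parts are handed over to physics and the part that was hit is pushed in the direction of the attack:

    using UnityEngine;

    // Sketch of the health/ragdoll idea: apply damage on hit, and on death free
    // the body-part rigidbodies and push the part that was struck.
    public class AIHealth : MonoBehaviour
    {
        public float health = 100f;
        public float hitForce = 250f; // placeholder impulse strength

        public void TakeDamage(float amount, Rigidbody hitPart, Vector3 hitDirection)
        {
            health -= amount;

            if (health <= 0f)
            {
                // Let physics take over the body parts.
                foreach (Rigidbody rb in GetComponentsInChildren<Rigidbody>())
                    rb.isKinematic = false;

                // Push the part that was actually hit so the death reacts to the blow.
                hitPart.AddForce(hitDirection.normalized * hitForce, ForceMode.Impulse);
            }
        }
    }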

After I had done the attacking mechanics and was testing them in the scene I noticed a strange bug where the AI would only attack once and would just stand in front of the player. After a few seconds the AI would then begin to run in circles around the player, and after several more seconds it would go back to its patrolling state. I found that this was being caused by how I was calling the AI's attacking function, and it has now been fixed.

Next week I will be adding the final touches to the AIs attacking mechanics and doing some further play testing. I will be trying the AI in several more advanced scenes and will be putting together a scene so that I can get external play-testers to try the AI out and gain feedback from them. This should give me time to make any final adjustments to the AI ready for the end of my project.


References
mmarti (2002) Sword Model. Available at: https://www.turbosquid.com/3d-models/medieval-sword-3d-model/168366 (Accessed: 12 April 2017).

Sunday, 9 April 2017

Testing AI

For this week's work I looked at testing my AI in more advanced scenes to see how it managed with pathing and reacting to the player. I started by creating a scene with a maze along with stairs at different heights. The AI managed to path between all the objects and handled the different heights fine. Having moved the sight raycast from feet height to eye line has greatly improved the AI's sight. I decided to remove the code that handles the AI's attack for now, as it was causing a few problems and making it hard for me to test some of the AI's other features. I've changed it so that the AI will now run up and stand next to the player instead of attempting to shoot him. I may look at adding the attack back in over the next few weeks, depending on time.
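The "run up and stand next to the player" behaviour can be done with the NavMeshAgent's stopping distance; a minimal sketch, assuming a simple chase state with a player reference (both placeholders for my actual setup), looks like this:

    using UnityEngine;
    using UnityEngine.AI;

    // Sketch of the chase behaviour: keep heading for the player and let the
    // agent halt once it is within its stopping distance.
    public class ChasePlayer : MonoBehaviour
    {
        public Transform player;              // placeholder reference to the player
        public float standOffDistance = 1.5f; // how close the AI stops to the player

        private NavMeshAgent agent;

        private void Start()
        {
            agent = GetComponent<NavMeshAgent>();
            agent.stoppingDistance = standOffDistance; // stop just short of the player
        }

        private void Update()
        {
            agent.SetDestination(player.position);
        }
    }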


After I had tested the scene I tried adding multiple AI into the scene to see how they work together and how it impacts performance. The AI worked well together, however something that I overlooked when creating my A* algorithm is that the node grid is generated in the same script that generates the path. This means that each AI needs the script attached to it, so a node grid is generated for every AI in the scene. Although this didn't cause too many performance issues with only a handful of AI, it is still something I want to change as I want the AI to be as efficient as I can make it.

Next week I will look into changing my algorithm so that it is more efficient with multiple AI in the scene. I will also be doing additional playtesting to see if I can find any other problems with my code. Depending on how long these tasks take I might look at redoing the AI's attacking mechanics.

Monday, 3 April 2017

More A* Adjustments

For this week's work I looked at making some small adjustments to my pathfinding algorithm. The things I looked at changing were adding the ability to check how long the AI's path is, fixing node generation around raised platforms, fixing the raycast height for the AI's sight, and looking into the AI running awkwardly between nodes when moving.

I started by adding the ability to check the path distance. This allows me to use my algorithm to calculate the sound heard by the AI, as previously I was still using Unity's navMesh for this. After changing this I fixed the bug where some nodes were generated strangely around raised platforms. This was causing a few problems where the AI would try to take a path that it couldn't reach, so it would get stuck. Next I looked at changing the movement between nodes. I started by changing how the AI rotates when moving. This helped where there were no obstacles around the AI, however when it needed to path up stairs or around an object the AI would get stuck and then glitch through walls. I changed this back to the previous system and will look at fixing it again later. Finally I looked at fixing the height the AI's sight raycast was being shot from. Previously it was shot from the AI's feet, and when I moved it up to eye level I either got errors or the raycast wouldn't hit the player. After a little bit of bug testing I noticed that the raycast was being shot at a slight angle upwards, causing it to go over the player. This has now been fixed so the AI sees from eye level.
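The path distance check itself is just a case of summing the distance between consecutive points on the path my algorithm returns; a small sketch, assuming the path is stored as a list of positions, looks like this:

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch of the path length check: sum the distance between consecutive
    // points on the path. The List<Vector3> is a stand-in for however my
    // algorithm actually stores its result.
    public static class PathLength
    {
        public static float Calculate(List<Vector3> path)
        {
            float length = 0f;
            for (int i = 1; i < path.Count; i++)
                length += Vector3.Distance(path[i - 1], path[i]);
            return length;
        }
    }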

Next week I will be bug testing all components of the AI and will be testing the AI in different situations to see how it handles them. Some examples of things I will be trying are adding multiple AI to the scene and more advanced scene layouts with multiple different heights and complex mazes.