Sunday, 18 December 2016
For this week's work I began by looking back at the problem from the previous week: moving the raycast up so that it shoots from the AI's eye level instead of from his feet. I was unable to find why the errors were appearing or a way to get it working, so I will arrange a meeting with Chris when we are back after Christmas to discuss the problem and how I can fix it.
Next I started to implement the hearing for the AI. I began by looking at the NavMesh documentation (Unity, 2017), as I've never used it before but am using it to calculate the distance a sound travels. This is a system I saw Unity use in their 'Stealth game tutorial' series (Unity, 2013), and I felt it was the best approach as it helps make sure the AI can't hear the player through walls. Learning it now is also beneficial as I will be using it for the AI's navigation as well. I got it working so that it calculates the distance the sound travelled; however, I had a strange problem where the distance would constantly increase even though neither the AI nor the player character was moving. I spent several hours' worth of work trying to figure out why this was happening, as both transforms were unchanged and no code was changing any of the variables. I eventually found that in the code the AI's Y value was changing, as if he was falling through the world, yet his transform in the inspector was not changing. I narrowed the problem down to the NavMeshAgent on the AI and fixed it by turning off the auto-stop checkbox.
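A minimal sketch of how that distance calculation might look (written in C# for clarity, though the project itself uses UnityScript; `SoundDistance` and the `player` field are my own placeholder names, and the `UnityEngine.AI` namespace assumes Unity 5.5 or newer):

using UnityEngine;
using UnityEngine.AI; // in older Unity versions NavMesh lives in UnityEngine instead

public class EnemyHearing : MonoBehaviour
{
    public Transform player;

    // Returns the distance a sound must travel from the player to this AI
    // along the NavMesh, or -1 if no complete path exists (e.g. blocked off).
    float SoundDistance()
    {
        NavMeshPath path = new NavMeshPath();
        if (!NavMesh.CalculatePath(player.position, transform.position,
                                   NavMesh.AllAreas, path) ||
            path.status != NavMeshPathStatus.PathComplete)
            return -1f;

        // Sum the straight-line segments between the calculated path corners.
        float distance = 0f;
        for (int i = 1; i < path.corners.Length; i++)
            distance += Vector3.Distance(path.corners[i - 1], path.corners[i]);
        return distance;
    }
}

Because the path bends around obstacles, this distance is what makes walls "muffle" sounds: a player on the far side of a wall is much further away along the NavMesh than in a straight line.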
As I am now going into the Christmas break I am unsure when I will be able to get work done, so I will blog whenever I make progress (until I get back to University, when regular weekly blogs will restart). This will not affect my milestones as I previously planned for this. Ideally I will have the hearing setting the player's alert status (similar to how the sight does) done by the time I go back to University.
References
Unity (2017) NavMesh documentation. Available at: https://docs.unity3d.com/ScriptReference/AI.NavMesh.html.
Unity (2013) Stealth game tutorial. Available at: https://www.youtube.com/watch?v=mBGUY7EUxXQ.
Saturday, 10 December 2016
More Sight
For this week I continued developing the sight for the AI. I started by looking at what was still needed for the sight to be complete. The main thing left to implement was that the AI should become alert if the player stays suspicious for an extended amount of time. It should also stay alert for a short period after the player leaves the viewcones, to simulate the AI remembering the player and continuing to look for him even though he can no longer be seen.
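A rough sketch of how those two timers might fit together (all names and threshold values here are hypothetical placeholders, not my actual implementation):

using UnityEngine;

public class AlertMemory : MonoBehaviour
{
    public float suspicionThreshold = 3f; // seconds of suspicion before going alert
    public float alertMemory = 5f;        // seconds alert persists after losing sight

    float suspiciousTimer;
    float alertTimer;
    public bool IsAlert { get; private set; }

    // Called once per frame with the result of the viewcone checks.
    public void UpdateAwareness(bool playerSuspicious)
    {
        if (playerSuspicious)
        {
            suspiciousTimer += Time.deltaTime;
            if (suspiciousTimer >= suspicionThreshold)
            {
                IsAlert = true;
                alertTimer = alertMemory; // refresh the AI's "memory" of the player
            }
        }
        else
        {
            suspiciousTimer = 0f;
            alertTimer -= Time.deltaTime;
            if (alertTimer <= 0f)
                IsAlert = false; // the AI forgets and returns to normal behaviour
        }
    }
}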
One problem that I have had while developing this was trying to move the raycast so that it shoots from the AI's eye level instead of from his feet (as it currently does). I have tried adding transform.up and Vector3.up to the enemy's position vector, but instead of working as it does at feet level I get an error whenever the player is inside a viewcone. I have tried several things to find out why this is happening but I am still unsure of the cause. I will continue to look into this next week and, if I can't figure it out, I will arrange a meeting with Chris to help me fix the issue.
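I haven't found the cause yet, so this is only a guess, but for reference here is one common arrangement for an eye-level ray: offset the origin upwards and recompute the direction from that raised origin, rather than adding the offset to an already-calculated feet-level direction (the `eyeHeight` value and all names are hypothetical):

using UnityEngine;

public class EyeLevelSight : MonoBehaviour
{
    public Transform player;
    public float eyeHeight = 1.6f; // hypothetical; should match the model's eyes
    public float sightRange = 20f;

    bool CanSeePlayer()
    {
        // Raise the ray origin to eye level...
        Vector3 origin = transform.position + Vector3.up * eyeHeight;
        // ...and recompute the direction from that raised origin, aiming at the
        // player's upper body rather than reusing the feet-level direction.
        Vector3 toPlayer = (player.position + Vector3.up) - origin;

        RaycastHit hit;
        if (Physics.Raycast(origin, toPlayer.normalized, out hit, sightRange))
            return hit.transform == player;
        return false;
    }
}

If the raised ray starts inside the AI's own collider, adding a layer mask to the raycast is a common way to stop it hitting the AI itself.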
After this week I have finished implementing the AI's sight; however, in my project milestones I had assigned myself an additional week to complete it, so I will be bringing all of my milestones forward by one week. If I am still ahead of my milestones after implementing the AI's hearing, I will look at additional features to implement in the spare weeks that I will have.
Saturday, 3 December 2016
Sight Beginnings
This week I started implementing the AI character's sight. I began by creating a basic scene with the 3D model from the Unity standard assets and adding a first person controller, so that I could move around the scene and test things.
After I had put together my scene I found a website for visualising different angles (Visnos, no date). I used this to visually see and determine what angle each viewcone should be.
I then started looking at how I could detect the player if he moved within one of the viewcones. I first looked at using multiple raycasts fired out across each viewcone. One problem with this was the sheer number of raycasts: with five viewcones on the enemy this would mean a lot of raycasts per frame (and far more if multiple enemies were put into one scene). Although I didn't go into much detail on exactly how costly this would be, I decided against it as I want the system to be as efficient as possible. Instead I went with a system that uses the player's position and the AI character's position to calculate the angle between them. I check this angle against the angle of the viewcone to make sure the player could be seen, and finally shoot a single raycast to check that no objects are blocking the AI character's view of the player.
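In outline, that check looks something like the following (a simplified C# sketch; the field names are placeholders and the real version uses per-cone values rather than a single angle and distance):

using UnityEngine;

public class ViewconeCheck : MonoBehaviour
{
    public Transform player;
    public float viewAngle = 60f;   // full cone angle, in degrees
    public float viewDistance = 10f;

    bool PlayerInViewcone()
    {
        Vector3 toPlayer = player.position - transform.position;

        // Too far away to be inside this cone at all.
        if (toPlayer.magnitude > viewDistance)
            return false;

        // Compare the angle between the AI's facing direction and the player
        // against half the cone's full angle.
        if (Vector3.Angle(transform.forward, toPlayer) > viewAngle * 0.5f)
            return false;

        // Finally, a single raycast confirms nothing blocks the line of sight.
        RaycastHit hit;
        if (Physics.Raycast(transform.position, toPlayer.normalized,
                            out hit, viewDistance))
            return hit.transform == player;
        return false;
    }
}

This way the expensive raycast only fires once the cheap distance and angle tests have already passed.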
I've made sure to go through and comment all of the code I have written, and to keep it easy to understand so that I can quickly work out what each part does if I come back to it later. I also made sure to use variables for things such as viewcone distances and angles, so they can easily be customised depending on where the enemy is used.
I plan to further develop the AI's sight over the next week, adding behaviours such as the AI becoming suspicious, rather than instantly alert, when you stand at the edge of a viewcone.
All of my work has been uploaded to my GitHub page: https://github.com/ABurton96/GameAI
References
Visnos (no date) Angle Visualiser. Available at: http://www.visnos.com/demos/basic-angles.
Friday, 25 November 2016
Planning
For my first week's work I have planned out how the AI's finite-state machine will look (created as a flow chart) and have also created designs for the character's sight and hearing.
By creating this flow chart I have been able to better visualise how the AI character will be thinking and what decisions he should be making next. I will try to keep the flow chart up to date with any changes I make during the development of the AI.
I've created a diagram showing how the viewcones for the AI character will look. The viewcones check for the player starting with the shortest cone and working outwards through each one in turn. If the player is seen in a viewcone, the search stops and the next part of the finite-state machine begins. This is similar to the system used in the game 'Thief' (Leonard, 2003), and is sketched in code below.
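A rough sketch of that cascade (hypothetical names and values; the real cone data will come out of the sight design):

using UnityEngine;

[System.Serializable]
public struct Viewcone
{
    public float length; // how far the cone reaches
    public float angle;  // full cone angle, in degrees
}

public class ViewconeCascade : MonoBehaviour
{
    // Ordered shortest cone first, matching the diagram.
    public Viewcone[] cones;

    // Returns the index of the first (shortest) cone the player falls inside,
    // or -1 if he is outside all of them; the FSM reacts based on the result.
    public int FirstConeContaining(Vector3 playerPosition)
    {
        Vector3 toPlayer = playerPosition - transform.position;
        for (int i = 0; i < cones.Length; i++)
        {
            if (toPlayer.magnitude <= cones[i].length &&
                Vector3.Angle(transform.forward, toPlayer) <= cones[i].angle * 0.5f)
            {
                return i; // stop checking as soon as the player is found
            }
        }
        return -1;
    }
}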
I've also created a diagram to show how the AI's hearing will work. It will start by checking for the player inside of a trigger. If the player is found, additional calculations will begin to work out how much noise the player is making and how far that noise has to travel to reach the AI. If the distance is still close enough, it will move onto the next part of the state machine.
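A sketch of how the trigger stage might gate the more expensive checks (the `PlayerNoise` component and the noise-versus-distance comparison are hypothetical, so they are shown only as comments):

using UnityEngine;

[RequireComponent(typeof(SphereCollider))]
public class HearingTrigger : MonoBehaviour
{
    public float hearingRange = 15f;

    void Start()
    {
        // The sphere trigger is a cheap first filter: only a player inside it
        // is worth the more expensive noise and NavMesh-distance calculations.
        SphereCollider trigger = GetComponent<SphereCollider>();
        trigger.isTrigger = true;
        trigger.radius = hearingRange;
    }

    void OnTriggerStay(Collider other)
    {
        if (!other.CompareTag("Player"))
            return;

        // Hypothetical follow-up, sketched as comments only:
        // float noise = other.GetComponent<PlayerNoise>().CurrentNoise;
        // float travelled = SoundDistance(other.transform.position);
        // if the noise is loud enough for that travel distance, raise awareness.
    }
}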
I've also looked on the Unity Asset Store for assets that will be useful when creating the AI. I will be using some standard assets (Unity, 2017) created by Unity (a character model, animations and sounds), and ProBuilder Basic (ProCore, 2017), which will let me quickly put together scenes and environments to test all the different mechanics I will be making. I will keep an eye on the Asset Store while developing in case there are other useful resources that can help.
I have also created a GitHub page for my project so people can look at my Unity project, give feedback if they feel things should be done differently, or use it to learn some techniques for creating AI.
GitHub page: https://github.com/ABurton96/GameAI
References
Leonard, T. (2003) Building an AI Sensory System: Examining the Design of Thief: The Dark Project. Available at: http://www.gamasutra.com/view/feature/2888/building_an_ai_sensory_system_.php.
ProCore (2017) ProBuilder Basic. Available at: https://www.assetstore.unity3d.com/en/#!/content/11919.
Unity (2017) Standard Assets. Available at: https://www.assetstore.unity3d.com/en/#!/content/32351.
Sunday, 20 November 2016
Final Project Proposal
I have written up my project proposal and have uploaded what I will be doing and my rationale for it, along with my project milestones.
Proposed Final Project Title
Creating artificial intelligence characters for games inside of Unity.
Brief Outline of Work
I will be creating artificial intelligence characters (one prefab character that can be copied, with variables adjusted as needed) inside of Unity. They will be able to see, hear and move, and will react to actions made by a player as if they were put into a complete game (as I will only be making the AI). I won't be creating any of the models, textures, sounds or animations used for the artificial character; I will use materials found online instead, as I only want to focus on my programming ability.
I will be using a finite-state machine, which will allow me to break each of the artificial character's behavioural elements down into flow charts and graphs for better visualisation. This should give a powerful yet flexible system, allowing the character's behaviour to be iterated on without too much additional work. I looked into other methods, such as behaviour trees or a utility system; however, I felt that a finite-state machine would be the best choice for my needs.
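As an illustration of why a finite-state machine maps so cleanly onto a flow chart, here is its bare skeleton in code (the states shown are placeholders, not the final set):

using UnityEngine;

public class EnemyStateMachine : MonoBehaviour
{
    // Hypothetical states; the real set will come from the flow chart.
    enum State { Patrol, Suspicious, Alert }

    State current = State.Patrol;

    void Update()
    {
        // The FSM boils down to one switch: each state runs its own
        // behaviour and decides which state comes next.
        switch (current)
        {
            case State.Patrol:
                // follow the patrol route; transition if the senses report anything
                break;
            case State.Suspicious:
                // look towards the disturbance; escalate or calm down over time
                break;
            case State.Alert:
                // chase or search for the player
                break;
        }
    }
}

Each box in the flow chart becomes a case, and each arrow becomes an assignment to `current`, which is what makes iteration on the behaviour cheap.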
Any decision made by the artificial intelligence character should be logical and consistent. Failing at this leads to the player becoming disillusioned and enjoying the game less, for example if the AI character stops its patrolling pattern for no reason or is needlessly unpredictable. Any choice the AI makes should take around 0.2 to 0.4 seconds of reaction time to seem realistic, with the timing fluctuating depending on whether the AI has to differentiate between multiple things before it makes its decision and how far the player is from the enemy.
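One simple way to get that 0.2-0.4 second window is to wrap each reaction in a short randomised delay, for example (a sketch, not the final implementation):

using System.Collections;
using UnityEngine;

public class ReactionDelay : MonoBehaviour
{
    // Runs the given reaction after a randomised, human-like pause.
    public IEnumerator ReactAfterDelay(System.Action reaction)
    {
        yield return new WaitForSeconds(Random.Range(0.2f, 0.4f));
        reaction();
    }
}

It would be started with something like `StartCoroutine(ReactAfterDelay(() => BecomeAlert()));`, with a longer range substituted when the AI has more things to weigh up before deciding.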
As I will be creating this inside of Unity, I will make use of Unity's built-in navigation system, NavMesh. Although it is easier to implement than other pathfinding methods such as A*, it is still very powerful and gives me full control over the AI's movement, letting me set what it will and will not be allowed to navigate across. When moving, the character should make no sharp turns and should not move at any illogical or unusual angles, so that the movement looks realistic and believable for a humanoid character. Choices about where the character will move should take no more than a couple of frames to be decided; any longer will leave it awkwardly standing still, thinking about where its next step should be, which looks unrealistic.
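A minimal sketch of driving the character with a NavMeshAgent (the values are placeholders; the `angularSpeed` setting is what limits unnaturally sharp turns):

using UnityEngine;
using UnityEngine.AI; // in older Unity versions NavMeshAgent lives in UnityEngine

[RequireComponent(typeof(NavMeshAgent))]
public class EnemyMovement : MonoBehaviour
{
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.speed = 3.5f;
        // A moderate turning speed keeps the character from snapping
        // through sharp, unrealistic-looking turns.
        agent.angularSpeed = 120f;
    }

    // Pathfinding resolves within a frame or two, so the character never
    // stands around "thinking" about where its next step should be.
    public void MoveTo(Vector3 destination)
    {
        agent.SetDestination(destination);
    }
}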
The artificial character's ability to see the playable character will work using multiple raycast view cones of varying lengths and angles: the shortest cone will have the widest angle, gradually changing to the longest cone with the narrowest angle. My reasoning for adopting this approach is to simulate actual human vision, as humans find it harder to see things in their peripheral vision than things up close. The shorter view cones will instantly alert the AI to the player's presence, whereas the longer cones will only make it curious. All of this should take into account the lighting of the scene around the player and on the player.
The ability to hear will come from a mix of triggers on the enemy and Unity's NavMesh (which is also being used for navigation). A sphere collider will be used to determine whether the player is within a set distance of the AI character. This then allows further calculations to determine whether any walls are blocking the sound from reaching the enemy. If there are obstacles, the path calculated using NavMesh determines how far the sound has to travel to reach the character. Depending on how close the sound is to the character, the AI's awareness level will change.
All components for the artificial character should be programmed efficiently, be easy to read and be commented throughout. This should make it easier when I go back to previously written code to adjust and iterate, as I'll be able to quickly determine which part of the code carries out each task, helping the overall AI development. All progress will be shared weekly on my blog, with all code and the Unity project file shared on GitHub.
Rationale for The Project
The rationale for my project is both personal interest and career interest. It's a personal interest of mine because I enjoy programming and want to improve: to learn new techniques, increase my knowledge of the JavaScript programming language and get better at using the Unity game engine. My aim is also to get into good programming habits, such as commenting my code consistently and writing it as efficiently as I can. This project will not only give me a good piece of work for my portfolio, it also has a career interest: many game companies (and many other IT-related companies) are hiring programmers to work on the AI inside their games and programmes, so it is an in-demand role. Many game companies would like you to have worked on the AI of a released triple-A title; as I haven't released any games at that level, I will instead build the AI to a level that would be suitable for a bigger, complete game if it were expanded further. Another skill common among programming jobs that I aim to develop is the ability to work efficiently and neatly and to keep code well documented. Achieving this should help me when it comes to getting a job in the industry.
Sunday, 6 November 2016
Ideas
I have started researching different ideas for my final project. I enjoy all aspects of creating games, but programming them most of all, so I want to work on that to get better and create more advanced things. I have started researching several programming projects I might like to work on: AI, procedural generation and networking. Of these, the one I like the most so far is AI.
Friday, 28 October 2016
Welcome to my final project blog
I'm a student in my final year of a BA (Hons) Computer Games Design course at the University of Suffolk. I will be using this blog to show my progress on my final project, posting weekly updates (once the project has started) with my progress and any problems I face along the way.