Sunday, 20 November 2016

Final Project Proposal

I have written up my project proposal and uploaded an outline of what I will be doing along with my rationale for it. I have also uploaded my project milestones.

Proposed Final Project Title


Creating artificial intelligence characters for games inside of Unity.

Brief Outline of Work

I will be creating artificial intelligence characters (one prefab character that could be copied, with its variables adjusted as needed) inside of Unity. The characters will be able to see, hear and move, and will react to the player's actions as if they were part of a complete game (as I will only be making the AI). I won't be creating any of the models, textures, sounds or animations used for the character, so I will be using materials found online instead, as I only want to focus on my programming ability.

I will be using a finite-state machine, which will allow me to break each of the artificial character's behavioural elements down into flow charts and graphs for better visualisation. This should give me a powerful yet flexible system, allowing the character's behaviour to be iterated on without too much additional work. I looked into other methods, such as behaviour trees and utility systems; however, I felt that a finite-state machine would be the best choice for my needs.
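To illustrate the kind of structure I have in mind, here is a minimal, engine-agnostic sketch of a finite-state machine in plain TypeScript. The state and event names (Patrol, Investigate, Chase and so on) are just examples, not the final design:

```typescript
// Minimal finite-state machine sketch. States and events are
// illustrative placeholders, not the project's final behaviour set.
type State = "Patrol" | "Investigate" | "Chase";
type Event = "heardNoise" | "sawPlayer" | "lostPlayer" | "calmedDown";

// Transition table: for each state, which events it reacts to and
// which state each event moves it into.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  Patrol:      { heardNoise: "Investigate", sawPlayer: "Chase" },
  Investigate: { sawPlayer: "Chase", calmedDown: "Patrol" },
  Chase:       { lostPlayer: "Investigate" },
};

class StateMachine {
  constructor(public state: State = "Patrol") {}

  // Apply an event; events the current state does not list are
  // ignored, which keeps the behaviour consistent and predictable.
  handle(event: Event): State {
    const next = transitions[this.state][event];
    if (next !== undefined) this.state = next;
    return this.state;
  }
}
```

Because every transition lives in one table, adding or tweaking behaviour is a one-line change, which is exactly the kind of low-cost iteration I want from this system.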

Any decision made by the artificial intelligence character should be logical and consistent. Failing at this will leave the player disillusioned and lessen their enjoyment of the game: for example, an AI character that abandons its patrol pattern for no reason is needlessly unpredictable. Any choice the AI makes should take around 0.2 to 0.4 seconds of reaction time for it to seem realistic. The exact timing will fluctuate depending on whether the AI has to differentiate between multiple things before making its decision, and on how far the player is from the enemy.
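One simple way to pick a delay inside that 0.2–0.4 second window is to scale it by distance and by how many stimuli the AI has to weigh up. The weighting below is purely an illustrative assumption to show the idea, not a fixed design decision:

```typescript
// Sketch: choose a reaction delay in the 0.2-0.4 s window.
// The 50/50 weighting of distance vs. stimulus count is an
// illustrative assumption.
const MIN_REACTION = 0.2; // seconds
const MAX_REACTION = 0.4;

function reactionDelay(distance: number, maxDistance: number, stimuli: number): number {
  // Farther players and more competing stimuli both push the delay
  // toward the upper end of the window.
  const distanceFactor = Math.min(distance / maxDistance, 1);
  const stimulusFactor = Math.min((stimuli - 1) * 0.25, 1);
  const t = Math.min(1, 0.5 * distanceFactor + 0.5 * stimulusFactor);
  return MIN_REACTION + t * (MAX_REACTION - MIN_REACTION);
}
```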

As I will be creating this inside of Unity, I will make use of Unity's built-in navigation system, NavMesh. Although it is easier to implement than other pathfinding methods such as A*, it is still very powerful and will give me full control of the AI's movement, letting me set exactly what it will and will not be allowed to navigate across. When moving, the character should make no sharp turns and should not move at any illogical or unusual angles; this ensures the movement looks realistic and believable for a humanoid character. Choices about where the character will move should take no more than a couple of frames to be decided. Any longer will leave it awkwardly standing still, thinking about where its next step should be, which looks unrealistic.
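In Unity the NavMesh agent handles the pathfinding itself, but the "no sharp turns" rule can be sketched independently of the engine: each frame, clamp how far the heading may rotate toward the desired direction. The maximum turn rate below is an example value:

```typescript
// Engine-agnostic sketch of limiting turn rate so the character
// never snaps through a sharp angle. Headings are in radians;
// maxTurn is the most the heading may change in one frame
// (an illustrative parameter).
function clampHeading(current: number, desired: number, maxTurn: number): number {
  // Smallest signed difference between the two headings (-PI..PI),
  // so the character always turns the short way round.
  let diff = desired - current;
  while (diff > Math.PI) diff -= 2 * Math.PI;
  while (diff < -Math.PI) diff += 2 * Math.PI;
  // Turn at most maxTurn radians this frame.
  const step = Math.max(-maxTurn, Math.min(maxTurn, diff));
  return current + step;
}
```

Called once per frame with a small maxTurn, this makes the character rotate smoothly toward each new waypoint instead of spinning on the spot.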

The artificial character’s ability to see the playable character will work by using multiple raycast view cones. These view cones will have varying lengths and angles: the shortest cone will have the widest angle, tapering until the longest cone has the narrowest angle. My reasoning for this approach is to simulate actual human vision, as humans find it harder to see in their peripheral vision than directly in front of them. This means the shorter view cones will instantly alert the AI to the player’s presence, whereas the longer cones will only make it curious. All of this should take into account the lighting of the scene around the player and on the player.
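The layered-cone check itself reduces to a distance and angle test per cone. The sketch below (with made-up ranges and angles, in 2D for brevity) shows the idea; the occlusion raycasts and lighting checks the real implementation would also need are left out:

```typescript
// Layered view cones: the short cone is wide and triggers full
// alertness; the long cone is narrow and only raises curiosity.
// Ranges and angles are illustrative example values.
interface Cone { range: number; halfAngleDeg: number; result: "alert" | "curious"; }

// Ordered shortest/widest first so the strongest reaction wins.
const cones: Cone[] = [
  { range: 5,  halfAngleDeg: 70, result: "alert"   },
  { range: 15, halfAngleDeg: 30, result: "curious" },
];

function checkVision(
  forward: [number, number],   // unit vector of the AI's facing direction
  toPlayer: [number, number],  // vector from the AI to the player
): "alert" | "curious" | "unseen" {
  const dist = Math.hypot(toPlayer[0], toPlayer[1]);
  if (dist === 0) return "alert";
  // Angle between facing direction and the player, via dot product.
  const cos = (forward[0] * toPlayer[0] + forward[1] * toPlayer[1]) / dist;
  const angleDeg = (Math.acos(Math.max(-1, Math.min(1, cos))) * 180) / Math.PI;
  for (const cone of cones) {
    if (dist <= cone.range && angleDeg <= cone.halfAngleDeg) return cone.result;
  }
  return "unseen";
}
```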

The ability to hear will come from a mix of triggers on the enemy and Unity’s NavMesh (which is also being used for navigation). A sphere collider will determine whether the player is within a set distance of the AI character. Further calculations will then determine whether any walls are blocking the sound from reaching the enemy; if there are obstacles, the NavMesh path will give the distance the sound has to travel around them to reach the character. The AI’s awareness level will change depending on how close the sound is.
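Put together, the hearing check is a radius test followed by a path-length test. In the sketch below the path length stands in for what Unity's NavMesh would supply, and the radius and thresholds are example values:

```typescript
// Hearing sketch: the noise must first fall inside the hearing
// radius (the sphere trigger); the NavMesh path length around any
// walls then decides how muffled it is. Thresholds are illustrative.
function hearingAwareness(
  directDistance: number, // straight-line distance to the noise
  pathDistance: number,   // path length around obstacles (from NavMesh)
  hearingRadius: number,
): "unaware" | "suspicious" | "alert" {
  if (directDistance > hearingRadius) return "unaware"; // outside the trigger
  // Walls force the sound to "travel" further, muffling it.
  if (pathDistance <= hearingRadius * 0.5) return "alert";
  if (pathDistance <= hearingRadius) return "suspicious";
  return "unaware";
}
```

So a noise right next to the AI fully alerts it, the same noise behind a long run of walls only makes it suspicious, and anything outside the sphere is ignored entirely.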

All components of the artificial character should be programmed efficiently, be easy to read and be commented throughout. This should make it easier for me to adjust and iterate when I return to previously written code, as I’ll be able to quickly determine which part of the code carries out each task, helping the overall AI development. All progress will be shared weekly on my blog, with all code and the Unity project file shared on GitHub.



Rationale for the Project

The rationale for my project is both personal interest and work interest. It’s a personal interest of mine because I enjoy programming and want to improve: to learn new techniques, increase my knowledge of the JavaScript programming language and get better at using the Unity game engine. My aim is also to get into good programming habits, such as commenting my code consistently and writing it as efficiently as I can. This project will not only let me create a good piece of work for my portfolio; it also has a work interest, as programmers are in demand both in game companies and in many IT-related companies, hired to work on the AI inside their games and programmes. Many game companies would like you to have worked on the AI for a released triple-A title; however, as I haven’t released any games at that level, I will be making the AI to a level that would suit a bigger, complete game if I were to expand it further. There is another skill common among programming jobs that I aim to develop as well: the ability to work efficiently and neatly and keep code well documented. Achieving this should help me when it comes to getting a job in the industry.



