For my dissertation, "Behavioural Differences in Intelligent Virtual Agents With and Without Active Vision Systems", I trained a virtual "hummingbot" (hummingbird bot :) ) in a simulated environment to locate and "drink" nectar from flowers using only its eyes (an RGB camera, i.e. active vision), with no other external input. I then compared its behaviour with that of a hummingbot given the locations and orientations of the flowers, its own position relative to the environment, and other inputs.
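The core contrast is in what each agent is allowed to observe. The sketch below is purely illustrative (it is not taken from the thesis) and assumes NumPy-style arrays for the inputs.

```python
# Illustrative only: the two observation set-ups being compared.
from dataclasses import dataclass
import numpy as np

@dataclass
class VisionOnlyObservation:
    # Active-vision hummingbot: RGB camera frames are its only input.
    rgb_frame: np.ndarray            # e.g. (height, width, 3) image from the agent's camera

@dataclass
class FullStateObservation:
    # Comparison hummingbot: direct access to the simulation state.
    flower_positions: np.ndarray     # (n_flowers, 3) flower locations
    flower_orientations: np.ndarray  # (n_flowers, 3) flower orientations
    own_position: np.ndarray         # agent position relative to the environment
    own_orientation: np.ndarray      # agent orientation
```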
Read the thesis here.
An implementation of a Genetic Algorithm that I designed to maximise the revenue generated by a supermarket by finding the optimal price for 20 items, with each price ranging from £0.01 to £10.
Two solutions were attempted: an exploitative (elitist) approach and an exploratory one. The two use different crossover, mutation and selection criteria.
May the fittest solution survive B)
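As a rough illustration of the elitist variant, here is a minimal sketch. The revenue function, operators and parameters shown are placeholders for illustration, not the ones used in the project.

```python
import random

N_ITEMS = 20
PRICE_MIN, PRICE_MAX = 0.01, 10.0

def random_solution():
    # A candidate solution: one price per item.
    return [round(random.uniform(PRICE_MIN, PRICE_MAX), 2) for _ in range(N_ITEMS)]

def revenue(prices):
    # Placeholder fitness: stands in for the supermarket's revenue model.
    return sum(p * max(0.0, 100 - 9 * p) for p in prices)

def crossover(a, b):
    # Single-point crossover (an assumption; the two variants use different operators).
    point = random.randint(1, N_ITEMS - 1)
    return a[:point] + b[point:]

def mutate(prices, rate=0.1):
    # Replace a price with a fresh random one with small probability.
    return [round(random.uniform(PRICE_MIN, PRICE_MAX), 2) if random.random() < rate else p
            for p in prices]

def run(pop_size=50, generations=200, elite=5):
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=revenue, reverse=True)
        # Elitism: carry the best solutions over unchanged (the exploitative part).
        next_gen = population[:elite]
        while len(next_gen) < pop_size:
            # Parents drawn from the top half (truncation selection).
            a, b = random.sample(population[:pop_size // 2], 2)
            next_gen.append(mutate(crossover(a, b)))
        population = next_gen
    return max(population, key=revenue)

best = run()
print(best, revenue(best))
```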
See on GitHub
A deep learning program that converts Indian Sign Language to text in real time using OpenCV and CNNs. Three filters were applied to the RoI (region of interest, the white box) to obtain a black-and-white image region, on which the neural network was trained for 5 epochs. The dataset consisted of around 22,000 images in the training set and 11,000 images in the test set.
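A rough sketch of what the real-time capture and preprocessing loop might look like. The specific three filters shown (grayscale, Gaussian blur, adaptive threshold), the RoI coordinates and the window names are assumptions for illustration; the trained CNN itself is omitted.

```python
import cv2

ROI = (50, 50, 250, 250)  # x1, y1, x2, y2 of the white box (illustrative values)

def preprocess(frame):
    x1, y1, x2, y2 = ROI
    roi = frame[y1:y2, x1:x2]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)      # filter 1: grayscale
    blur = cv2.GaussianBlur(gray, (5, 5), 0)          # filter 2: blur to reduce noise
    thresh = cv2.adaptiveThreshold(blur, 255,         # filter 3: black-and-white image
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)
    return thresh

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    processed = preprocess(frame)
    # The processed image would be resized and fed to the trained CNN here
    # to predict the sign, e.g. label = model.predict(...)
    cv2.rectangle(frame, ROI[:2], ROI[2:], (255, 255, 255), 2)
    cv2.imshow("camera", frame)
    cv2.imshow("network input", processed)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```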
See on GitHub
HTML Website Maker