christopher_blackman
- Nerd
christopher --masters computational_geometry
christopher --developer
christopher --educator
christopher --skill [type]
Christopher is a Subject Matter Expert in Computational Geometry with a Master's in Computer Science. He studied with the Computational Geometry Lab, which focuses on applications such as Geographic Information Systems (GIS), robotics, and algorithms with a geometric focus. Throughout his studies he has focused on the analysis of algorithms: algorithms that handle large volumes of data; deterministic algorithms; randomized algorithms; algorithms in a geometric space; classification algorithms; algorithms in a distributed environment where there is no central computing device; and algorithms in a centralized environment where parallel processing and threading are available. Furthermore, in his graduate years he has assisted in teaching courses on algorithms, AI, and systems programming.
--masters computational_geometry
Christopher is a Subject Matter Expert in Computational Geometry who has studied a variety of topics involving range queries, point location, triangulation algorithms, convex hulls, routing algorithms, visibility graphs, polygon intersection, etc. Some of these topics have ties to GIS applications, where answering questions based on geographic information, and building the tools to answer those questions, are important. Other topics, such as robotics, involve routing one or more robots through a constrained environment in order to complete some objective. These are a few of the topics related to Christopher's field of study; a small example of one such primitive is sketched below.
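To give a concrete flavour of one such primitive (an illustrative Python sketch, not code from Christopher's projects), Andrew's monotone chain algorithm computes the convex hull of a planar point set in O(n log n):

    def convex_hull(points):
        # Andrew's monotone chain; points is a list of (x, y) tuples.
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def cross(o, a, b):
            # z-component of (a - o) x (b - o); > 0 means a left turn.
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        lower, upper = [], []
        for p in pts:                      # build lower hull left to right
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):            # build upper hull right to left
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]     # counter-clockwise, no duplicates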
Christopher obtained his master's under the supervision of Prosenjit Bose and Jean-Lou De Carufel at Carleton University, in the Computational Geometry Lab. His primary research focuses on distributed computing problems; in the past he has worked on Zombies and Survivors inside simple polygons (here), searching on a line in an arrangement of lines, and searching on an infinite line in an asynchronous rendezvous of two agents (here).
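As an illustrative sketch of the line-search flavour of this work (the classic doubling strategy, not code from the papers themselves), a searcher on an infinite line alternates directions and doubles the turning distance each phase, walking at most 9 times the target's distance in the worst case:

    def doubling_search(target, start=1.0):
        # Classic doubling strategy: turn points at +1, -2, +4, -8, ...
        pos, walked, bound, direction = 0.0, 0.0, start, 1
        while True:
            turn = direction * bound
            lo, hi = min(pos, turn), max(pos, turn)
            if lo <= target <= hi:               # target lies on this leg
                return walked + abs(target - pos)
            walked += abs(turn - pos)            # walk to the turn point
            pos = turn
            bound *= 2                           # double the search radius
            direction = -direction               # and reverse direction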
--developer
As a developer, Christopher has experience in multiple languages and tools: Python, C, C++, JavaScript, Java, Bash, and regular expressions. Most of his projects have been completed in a Linux environment using either Python or C; furthermore, Christopher has taught courses pertaining to systems programming. In developing his projects, Christopher employs version-control tools such as git and, for systems programming, valgrind and gdb.
--educator
Christopher has worked as a teaching assistant at Carleton University for three years. He has taught: Systems Programming, covering data representation, memory management, concurrent programming, and file I/O; Artificial Intelligence, covering heuristic search, Bayes' theorem and Bayesian inference, reinforcement learning, and neural networks; and Algorithms, covering sorting, searching, divide-and-conquer, dynamic programming, graph algorithms, and NP-completeness.
--skill [type]
types: algorithm_theory, computational_geometry, distributed_computing, artificial_intelligence, automata_theory
The Facility Location Problem (FLP) is about finding an optimal placement of facilities such that the placement minimizes (or maximizes) the distance of a population to the closest facility. For example, you would want to place your hospitals close to your population so as to reduce the maximum travel time.
An example of the FLP can be seen on Github. The program uses gradient descent in conjunction with Voronoi diagrams to optimize facility placement, and the results were compared against the optimal solution in both run time and performance.
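A minimal sketch of the idea (assuming a sum-of-Euclidean-distances objective; the repository's actual code may differ): assigning each point to its nearest facility is exactly the Voronoi partition of the facility sites, and each gradient step moves a facility along the summed unit vectors from its cell:

    import numpy as np

    def facility_step(points, facilities, lr=0.05):
        # One gradient step on sum_i min_j ||p_i - f_j||.
        d = np.linalg.norm(points[:, None, :] - facilities[None, :, :], axis=2)
        nearest = d.argmin(axis=1)               # Voronoi cell of each point
        grad = np.zeros_like(facilities)
        for j in range(len(facilities)):
            cell = points[nearest == j]
            if len(cell) == 0:
                continue
            diff = facilities[j] - cell          # facility minus its cell's points
            norms = np.linalg.norm(diff, axis=1, keepdims=True)
            grad[j] = (diff / np.maximum(norms, 1e-9)).sum(axis=0)
        return facilities - lr * grad

Iterating facility_step until the placement stabilizes performs the descent; the objective is only piecewise smooth (cells change as facilities move), so small learning rates help.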
Implemented a variety of classification models: decision trees, binary naive Bayes, Gaussian naive Bayes, and dependency-tree classification based on conditional Bayesian probability. The models were tested on generated test sets and on a real glass dataset. The project can be seen on Gitlab.
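As an illustration of one of these models (a toy Gaussian naive Bayes, not the repository's code): fit per-class feature means and variances, then score each class by its log prior plus the summed Gaussian log-likelihoods:

    import numpy as np

    class GaussianNaiveBayes:
        def fit(self, X, y):
            # Per-class priors, feature means, and feature variances.
            self.classes = np.unique(y)
            self.priors = np.array([(y == c).mean() for c in self.classes])
            self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
            self.vars = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
            return self

        def predict(self, X):
            # log P(c|x) up to a constant: log prior + summed Gaussian log-densities.
            ll = -0.5 * (np.log(2 * np.pi * self.vars[None]) +
                         (X[:, None, :] - self.means[None]) ** 2 / self.vars[None]).sum(axis=2)
            return self.classes[np.argmax(np.log(self.priors)[None] + ll, axis=1)]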
Implemented several classifiers for filtering email spam: feed-forward neural networks, multinomial naive Bayes, and random forests. Different normalization techniques were tested to measure their effects on classification and performance. The project can be seen on Github.
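A toy version of such a comparison (a sketch using scikit-learn with a made-up four-email dataset, not the repository's code) feeds the same multinomial naive Bayes classifier raw term counts versus tf-idf-normalized counts:

    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import Pipeline

    # Same classifier, two normalization schemes.
    raw = Pipeline([("counts", CountVectorizer()), ("nb", MultinomialNB())])
    tfidf = Pipeline([("counts", CountVectorizer()),
                      ("tfidf", TfidfTransformer()),
                      ("nb", MultinomialNB())])

    emails = ["win money now", "meeting at noon", "cheap pills now", "lunch tomorrow?"]
    labels = [1, 0, 1, 0]                      # 1 = spam, 0 = ham
    for name, model in [("counts", raw), ("tf-idf", tfidf)]:
        model.fit(emails, labels)
        print(name, model.predict(["free money now"]))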
PageRank is an algorithm based on a random walk on a graph, where the nodes of the graph are web pages and the edges are links between the web pages. One can turn this graph into a Markov chain by normalizing the outgoing links of each page and adding some randomness. The page where the random walk stops the most is the highest-ranked page; this ranking is given by the principal eigenvector of the Markov chain's transition matrix.
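Concretely, the ranking can be computed by power iteration on the damped transition matrix (a minimal sketch):

    import numpy as np

    def pagerank(adj, damping=0.85, iters=100):
        # adj[i][j] = 1 if page i links to page j.
        A = np.asarray(adj, dtype=float)
        n = len(A)
        out = A.sum(axis=1, keepdims=True)
        P = np.where(out > 0, A / np.maximum(out, 1), 1.0 / n)  # row-stochastic
        G = damping * P + (1 - damping) / n      # the "added randomness"
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = r @ G                            # one step of the random walk
        return r                                 # stationary distribution

The returned vector is the principal eigenvector of the damped chain, so its entries rank the pages.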
The task of finding significant papers in a set of unlabelled papers is similar to PageRank, where each paper is a node and each citation is an edge of the graph. The implementation was therefore based on the references of the papers, and the resulting ranking was compared with the h-index of the papers. The project can be seen on Gitlab.