Thesis/Project Final Exam Schedule

""

Final Examination Schedule

""

PLEASE JOIN US AS THE FOLLOWING CANDIDATES PRESENT THEIR CULMINATING WORK.

Spring 2022

Wednesday, May 4

Dat Tien Le

Chair: Dr. Brent Lagesse
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; Online
Thesis: Emulated Autoencoder:  A Time-Efficient Image Denoiser for Defense of Convolutional Neural Networks against Evasion Attacks

As Convolutional Neural Networks (CNNs) have become essential to modern applications such as image classification on social networks or self-driving vehicles, evasion attacks targeting CNNs can cause real harm to users. Therefore, a rising amount of research has focused on defending against evasion attacks. Image denoisers have been used to mitigate the impact of evasion attacks; however, there is not a sufficiently broad view of their use as adversarial defenses in image classification due to a lack of trade-off analysis. Thus, this thesis explores the trade-offs of a group of image denoisers, including training time, image reconstruction time, and loss of benign F1-scores of CNN classifiers. Additionally, Emulated Autoencoder (EAE), the method this thesis proposes to optimize these trade-offs for high-volume classification tasks, is evaluated alongside state-of-the-art image denoisers in both the gray-box and white-box threat models. EAE outperforms most image denoisers in both threat models while drastically reducing training and image reconstruction time compared to the state-of-the-art denoisers. As a result, EAE is more appropriate for securing high-volume image classification applications.


Monday, May 16

Matthew Sell

Chair: Dr. Marc Dupuis
Candidate: Master of Science in Cybersecurity Engineering

11:00 A.M.; Online
Project: Designing an Industrial Cybersecurity Program for an Operational Technology Group

The design of a cybersecurity program for an Information Technology (“IT”) group is well documented by a variety of international standards, such as those provided by the U.S. National Institute of Standards and Technology (“NIST”) 800-series Special Publications. However, for those wishing to apply standard information security practices in an Operational Technology (“OT”) environment that supports industrial control and support systems, guidance is seemingly sparse.

For example, a search of a popular online retailer for textbooks on the implementation of an industrial cybersecurity program revealed only seven books dedicated to the subject, with another two acting as “how-to” guides for exploiting vulnerabilities in industrial control systems. Some textbooks cover the high-level topics of developing such a program, but only describe the applicable standards, policies, and tools in abstract terms. It is left as an exercise to the reader to explore these concepts further when developing their own industrial cybersecurity program.

This project expands on the abstract concepts described in textbooks like those previously mentioned by documenting the implementation of an industrial cybersecurity program for a local manufacturing firm. The project started with hardware and software asset inventories, followed by a risk assessment and gap analysis, and then implemented mitigating controls using a combination of manual and automated procedures. The security posture of the OT group was continually evaluated against corporate security goals, the project-generated risk assessment, and NIST SP 800-171 requirements. Improvements in security posture and compliance with corporate requirements were achieved in part through alignment with existing policies and procedures developed by the organization’s IT group, with the balance implemented and documented by the author of this project. The materials generated by this project may be used to assist other organizations starting their journey toward securing their industrial control assets.


Wednesday, May 18

Nirali Gundecha

Chair: Dr. Munehiro Fukuda
Candidate: Master of Science in Computer Science & Software Engineering

8:45 A.M.; Online
Project: Lambda and Reduction Method Implementation for the MASS Library

MASS is a parallelizing library that provides multi-agent and spatial simulation over a cluster of computing nodes. The goal of this capstone is to reduce the communication overhead for data and to make the user experience effortless, thereby improving the efficiency of MASS.

This paper introduces two new features, lambda and reduction methods, and their implementation. No other agent-based library provides this feature to date, making it a unique contribution. This paper validates the lambda and reduce methods using the MASS library.

The lambda method implementation gives users the flexibility of using the MASS library frictionlessly. Using lambda methods, users can describe new functionality on the fly and obtain results instantly. On top of the lambda feature, the reduce method performs a reduction over any type of user or agent data; the operation can be anything, such as max, min, or sum.

The data collection step itself is described as a lambda method. Using the reduce method, users can perform a reduction in a single line of code, which improves code reliability and cleanliness. These features remove the hassle of writing blocks of code and of managing agents’ behavior across a cluster of nodes, making them distinctive as well as innovative: the lambda and reduce implementations are a unique contribution to agent-based libraries and their users.
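The lambda-plus-reduce pattern described above can be illustrated outside MASS. The sketch below is a plain-Python analogue, not MASS API: the `agent_energy` values and the `collect` transform are hypothetical stand-ins for per-agent data gathered across a cluster.

```python
from functools import reduce

# Hypothetical per-agent values gathered across the cluster.
agent_energy = [12, 7, 30, 4]

# A lambda describes the data-collection step on the fly...
collect = lambda value: value * 2          # e.g. a user-defined transform

# ...and a single reduce call folds the collected values with whatever
# binary operation the user picks: sum, max, min, and so on.
total = reduce(lambda a, b: a + b, map(collect, agent_energy))
largest = reduce(lambda a, b: a if a > b else b, agent_energy)

print(total)    # 106
print(largest)  # 30
```

The point of the pattern is that the reduction fits in one line, with the operation supplied by the user rather than hard-coded by the library.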


Pallavi Sharma

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; Online
Project: Text Synthesis

With the explosion of data in the digital domain, manual synthesis of long texts to extract important information is a laborious and time-consuming task. Mobile-based text synthesis systems that take text input and extract important information can be very handy and reduce the overall time and effort required for manual text synthesis. In this work, a novel system is developed that enables users to extract summaries and keywords from long texts in real time using a cross-platform mobile application. The application uses a hybrid approach based on feature extraction and unsupervised learning to generate quality summaries; ten sentence features are used for feature extraction. A hybrid technique combining machine learning with semantic methods is used to extract keywords/key-phrases from the source text. The application also allows users to manage, share, and listen to the information extracted from the input text. Additional features, such as allowing users to draft error-free notes, improve the user experience. To test the reliability of the system, experimental evaluation was carried out on the DUC 2002 dataset using ROUGE metrics. Results demonstrate a 51% F-score, which is higher than state-of-the-art methods for extractive summarization on the same dataset. The hybrid approach used for keyword/key-phrase extraction was tested for the validity of the resulting keywords; the application produced proper keywords in the form of phrases and words with an accuracy of 70%.
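As a rough illustration of the feature-based scoring an extractive summarizer performs, the sketch below ranks sentences with just two stand-in features (position and keyword overlap) in place of the ten the project combines; the document, keywords, and feature weights are all hypothetical.

```python
# Minimal extractive-summarization sketch: score each sentence with a
# couple of hand-picked features and keep the top-k in original order.
def summarize(sentences, keywords, k=2):
    def score(i, sent):
        position = 1.0 / (i + 1)                 # earlier sentences rank higher
        words = set(sent.lower().split())
        overlap = len(words & keywords) / max(len(words), 1)
        return position + overlap
    ranked = sorted(enumerate(sentences), key=lambda p: score(*p), reverse=True)
    chosen = sorted(i for i, _ in ranked[:k])    # restore document order
    return [sentences[i] for i in chosen]

doc = ["Solar power adoption is rising worldwide.",
       "The weather was pleasant last week.",
       "New panels cut solar energy costs sharply."]
print(summarize(doc, {"solar", "energy", "panels"}))
```

A real system would normalize and weight many such features; the structure (score, rank, select) stays the same.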


Thursday, May 19

Zhiyang Zhou

Chair: Dr. Afra Mashhadi
Candidate: Master of Science in Computer Science & Software Engineering

1:15 P.M.; Online
Project: Facial Recognition on Android Devices with Convolutional Neural Networks and Federated Learning

Machine Learning (ML) and Artificial Intelligence (AI) are widely applied in many modern services and products. Facial Recognition (FR) is a powerful ML application that has been used extensively in various fields. Traditionally, however, the models are trained on photos crawled from the World Wide Web (WWW), and they are often biased towards celebrities and the Caucasian population. Centralized Learning (CL), one of the most popular training techniques, requires all data to be on a central server to train ML models, which raises privacy concerns because the server takes ownership of end-user data. In this project, we first use Convolutional Neural Networks (CNNs) to develop an FR model that classifies 7 demographic groups using the FairFace image dataset, which has a more balanced and diverse distribution of ordinary face images across racial groups. To further extend training accessibility and protect sensitive personal data, we propose a novel Federated Learning (FL) system using Flower as the backend and Android phones as edge devices. The pre-trained models are first converted to TensorFlow Lite models, which are then deployed to each Android phone to continue learning on-device from additional subsets of FairFace. Training takes place in real time and only the weights are communicated to the server for model aggregation, thus separating user data from the server. In our experiments, we explore various centralized model architectures to achieve an initial accuracy of 52.9% with a model lightweight enough to continue improving to 68.6% in the Federated Learning environment. Application requirements on Android are also measured to validate the feasibility of our approach in terms of CPU, memory, and energy usage. As future work, we hope the system can be scaled to enable training across thousands of devices, with a filtering algorithm to counter adversarial attacks.
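The weight-only aggregation step can be sketched in a few lines. This is a generic FedAvg-style average, not the project's Flower/TensorFlow Lite code, and the client weights and sizes are toy values.

```python
import numpy as np

# Federated averaging in miniature: each device trains locally and sends
# only its weights; the server averages them weighted by the number of
# local examples, so raw user data never leaves the device.
def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical devices each report a single weight matrix.
w1 = np.array([[1.0, 2.0]])
w2 = np.array([[3.0, 4.0]])
w3 = np.array([[5.0, 6.0]])
global_w = fed_avg([w1, w2, w3], client_sizes=[10, 10, 20])
print(global_w)   # [[3.5 4.5]]
```

The third device holds half the data, so its weights pull the global model twice as hard as the others.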


Friday, May 20

Vishnu Mohan

Chair: Dr. Munehiro Fukuda
Candidate: Master of Science in Computer Science & Software Engineering

8:45 A.M.; Online
Project: Automated Agent Migration Over Structured Data

Agent-based data discovery and analysis views big-data computing as the result of agent interactions over the data. It performs better on a structured dataset by keeping the structure in memory and moving agents over the space. The key is how to automate agent migration in a way that simplifies scientists’ data analysis. We implemented this navigational feature in the Multi-Agent Spatial Simulation (MASS) library. First, this paper presents eight automatic agent navigation functions, each of which we identified, designed, and implemented in MASS Java. Second, we present performance improvements made to existing agent lifecycle management functions that migrate, spawn, and terminate agents. Third, we measure the execution performance and programmability of the new navigational functions in comparison to the previous agent navigation. The performance evaluation shows that the overall latency of all four benchmark applications improved with the new functions. The programmability evaluation shows that the new implementations reduced user lines of code (LOC) and made the code more intuitive and semantically closer to the original algorithm. The project successfully carried out two goals: (1) design and implement automatic agent navigation functions, and (2) make performance improvements to the current agent lifecycle management functions.

Jaynie A. Shorb

Chair: Dr. Brent Lagesse
Candidate: Master of Science in Cybersecurity Engineering

11:00 A.M.; Hybrid (DISC 464 & Online)
Project: Malicious USB Cable Exposer

Universal Serial Bus (USB) cables are ubiquitous, connecting a wide variety of devices such as audio, visual, and data entry systems, and charging batteries. Electronic devices have decreased in size over time and are now small enough to fit within the housing of a USB connector. There are harmless 100W USB cables with embedded E-marker chips that communicate power-delivery capabilities for sourcing and sinking current to charge mobile devices quickly. However, some companies have designed malicious hardware implants containing keyloggers and other nefarious programs in an effort to extract data from victims; any system compromise that can be implemented with a keyboard is possible with such implants. This project designs a malicious hardware implant detector that senses current draw from the USB cable, exposing these insidious designs. The Malicious USB Exposer is a hardware circuit with common USB connectors to plug in the device under test (DUT). It provides power to the DUT and uses a current sensor to determine the current draw from the cable. The output is a red LED bargraph that shows whether the DUT is compromised: unless the DUT contains internal LEDs, any red LED output indicates compromise. Active long USB cables intended to drive long distances produce a false positive and are not supported. The minimum current sensed is 10 mA, which is outside the range of normal USB cables with LEDs (4-6 mA) and E-marker chips (1 mA). Though there is another malicious USB detector on the market, it is created by a malicious USB cable supplier and designed to detect only their own cable. This project provides an open-source solution for distinguishing USB cables, uncovering a range of compromised cables from different vendors.
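The detection rule reduces to a current threshold. The sketch below encodes the figures quoted above (10 mA alarm floor, 4-6 mA for LED cables, roughly 1 mA for E-marker chips) as a hypothetical software classifier; the real detector implements this in hardware with a bargraph rather than in code.

```python
# Illustrative threshold logic for the detector described above.
# Currents are in milliamps; the 10 mA alarm threshold comes from the
# abstract, as do the benign draw levels for LED cables and E-markers.
ALARM_MA = 10.0

def classify(current_ma):
    if current_ma >= ALARM_MA:
        return "suspect: possible hardware implant"
    if current_ma > 0:
        return "benign: LED or E-marker level draw"
    return "passive cable: no measurable draw"

print(classify(1.0))    # E-marker chip
print(classify(5.0))    # LED cable
print(classify(48.0))   # implant drawing microcontroller-level current
```

Active long-run cables draw implant-level current legitimately, which is why the abstract excludes them as unsupported false positives.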

Carl Anders Mofjeld

Chair: Dr. Yang Peng
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Project: Adaptive Acceleration of Inference Services at the Network Edge

Deep neural networks (DNN) have enabled dramatic advancements in applications such as video analytics, speech recognition, and autonomous navigation. More accurate DNN models typically have higher computational complexity. However, many mobile devices do not have sufficient resources to complete inference tasks using the more accurate DNN models under strict latency requirements. Edge intelligence is a strategy that attempts to solve this issue by offloading DNN inference tasks from end devices to more powerful edge servers. Some existing works focus on optimizing inference task allocation and scheduling on edge servers to help reduce the overall inference latency. One key aspect of the problem is that the number of requests, their latency constraints, and network connection quality all change over time. These factors all impact the latency budget for inference computation. As a result, the DNN model that maximizes inference quality while meeting latency constraints can change as well. To exploit this opportunity, other works have focused on dynamically adapting the inference quality. Most such works, though, do not solve the problem of how to allocate and schedule tasks across multiple edge servers, as the former group does. In this work, we propose combining strategies from both areas of research to serve applications that use deep neural networks to perform inference on offloaded video frames. The goals of the system are to maximize the accuracy of inference results and the number of requests the edge cluster can serve while meeting the latency requirements of the applications. To achieve these design goals, we propose heuristic algorithms that jointly adapt model quality and route inference requests, leveraging techniques that include model selection, dynamic batching, and frame resizing. We evaluated the proposed system with both simulated and testbed experiments. Our results suggest that by combining techniques from both areas of research, our system is able to meet these goals better than either approach alone.
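The model-selection step can be sketched as picking the most accurate model whose inference latency fits the budget left after network delay. The (accuracy, latency) profiles below are made-up placeholders, not measurements from this project.

```python
# Sketch of latency-aware model selection: from the models that fit the
# remaining latency budget, choose the most accurate one.
MODELS = [                       # (name, accuracy, inference latency in ms)
    ("small",  0.70, 10),
    ("medium", 0.80, 25),
    ("large",  0.88, 60),
]

def select_model(budget_ms):
    feasible = [m for m in MODELS if m[2] <= budget_ms]
    if not feasible:
        return None              # this request cannot meet its deadline here
    return max(feasible, key=lambda m: m[1])   # maximize accuracy

print(select_model(30)[0])   # medium
print(select_model(5))       # None
```

In the full system this choice interacts with dynamic batching and frame resizing, since both also trade quality for latency.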


Monday, May 23

Ishpreet Talwar

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

9:00 A.M.; Online
Project: Recycle Helper - A Cross-Platform mobile application to Aid Recycling

With the growth of the population on the planet, the amount of waste generated has also increased. Such waste, if not handled correctly, can cause environmental issues. One of the solutions to this problem is recycling: the process of collecting and processing materials that would otherwise be thrown away as trash and turning them into new products. It can benefit the community and the environment. Recycling can be considered an umbrella term for the 3 R’s: Reduce, Reuse, and Recycle. A variety of items are present in the surrounding environment in different states and conditions, which makes recycling complex, because knowing the correct way to recycle each item can be overwhelming and time-consuming. To help solve this problem, this paper proposes a cross-platform mobile application that promotes recycling. It helps users by providing recycling instructions for different product categories. The application allows the user to capture or choose an image of an item using the phone camera or gallery. It uses software engineering methodologies and machine learning to identify the item and provide the relevant recycling instructions. The application is able to detect and predict items with an accuracy of 81.06%, using a Convolutional Neural Network (CNN) model. To motivate and engage users, the application allows them to set a monthly recycling goal, track their progress, and view their recycling history. The application is user-friendly and will help promote correct recycling in a less time-consuming manner.


Wednesday, May 25

Yan Hong

Chair: Dr. Munehiro Fukuda
Candidate: Master of Science in Computer Science & Software Engineering

8:45 A.M.; Online
Project: Graph Streaming in MASS Java

This project facilitates graph streaming in agent-based big data computing, where agents find the shape or attributes of a huge graph. Analyzing and processing massive graphs has become an important task in many domains because many real-world problems, such as biological networks and neural networks, can be represented as graphs. Those graphs can have millions of vertices and edges, and it is quite challenging to process such a huge graph with limited resources in a reasonable timeframe. The MASS (Multi-Agent Spatial Simulation) library already supports a graph data structure (GraphPlaces) that is distributed over a cluster of computing nodes. However, when processing a big graph, we may still encounter two problems: first, construction overhead that delays the actual computation; second, limited resources that slow down graph processing. To solve these two problems, we implemented graph streaming in MASS Java, which repetitively reads a portion of a graph and processes it while reading the next portion. It supports the HIPPIE and MATSim file formats as input graph files. We also implemented two graph streaming benchmarks, Triangle Counting and Connected Components, to verify the correctness and evaluate the performance of graph streaming. These two programs were executed with 1-24 computing nodes, demonstrating significant CPU- and memory-scalability improvements. We also compared the performance with the non-streaming solution: graph streaming avoids the explosive growth of the agent population and loads only a small portion of the graph at a time, both of which make efficient use of limited memory.
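The streaming idea, folding each newly read portion of the graph into a running result, can be sketched for Connected Components with a union-find. This is a conceptual Python analogue under assumed toy data, not MASS Java code.

```python
# Connected components over a streamed edge list: edges arrive in chunks
# and are folded into a union-find structure, so the whole graph never
# has to be resident in memory at once.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

chunks = [[(1, 2), (3, 4)],            # first portion of the graph...
          [(2, 5)],                    # ...folded in while the next loads
          [(6, 7)]]
for chunk in chunks:
    for a, b in chunk:
        union(a, b)

components = {find(v) for v in parent}
print(len(components))   # 3 components: {1,2,5}, {3,4}, {6,7}
```

Only the union-find state persists between chunks, which is tiny compared with the full edge list.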


Ankita Chakraborty

Chair: Dr. Brent Lagesse
Candidate: Master of Science in Cybersecurity Engineering

3:30 P.M.; Online
Project: Exploring Adversarial Robustness Using TextAttack

Deep neural networks (DNNs) are subject to adversarial examples that force deep learning classifiers to make incorrect predictions on input samples. In the visual domain, these perturbations are typically imperceptible to humans, resulting in disagreement between classifications made by people and by state-of-the-art models. In the natural language domain, on the other hand, small perturbations are readily perceptible, and the change of a single word might substantially affect the document's semantics. In our approach, we perform ablation studies to analyze the robustness of various attacks in the NLP domain and formulate ways to alter the factor of “robustness,” leading to more diverse adversarial text attacks. This work relies heavily on TextAttack (a Python framework for adversarial attacks, data augmentation, and adversarial training in NLP) to deduce the robustness of various models under attack from pre-existing or fabricated attacks. We offer various strategies to generate adversarial examples on text classification models that avoid the out-of-context and unnaturally complex token replacements easily identifiable by humans. We compare the results of our project with two baselines: random and pre-existing recipes. Finally, we conduct human evaluations with thirty-two volunteers from diverse backgrounds to guarantee semantic and grammatical coherence. Our research project proposes three novel attack recipes: USEHomogyphSwap, InputReductionLeven, and CompositeWordSwaps. Not only do these attacks reduce the prediction accuracy of current state-of-the-art deep-learning models to 0% with the fewest queries, but the crafted text they produce is also, to a great extent, visually imperceptible to human annotators.


Thursday, May 26

Brett Bearden

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering

8:45 A.M.; Online
Project: Redesigning the Virtual Academic Advisor System, Backend Optimizations, and Implementing a Python and Machine Learning Engine

Community college students aspire to continue their education at a 4-year college or university. The process of navigating college can be complex, let alone figuring out transfer requirements for individual schools. Assisting students in this process requires special knowledge of specific departments and majors. Colleges with lower budgets do not have funds for additional academic advising staff, so the task gets passed to the teaching faculty. Student academic planning is a time-consuming process that can detract from the time an instructor needs to focus on their current courses and students. For years, a team of students at the University of Washington Bothell has been working on a Virtual Academic Advisor (VAA) system to automate the process of generating student academic plans in support of Everett Community College (EvCC). The goal of the VAA system is to reduce the amount of time an instructor sits with an individual student during academic advisement. However, the VAA system is not yet complete, and a few roadblocks were preventing it from moving forward. The work proposed in this capstone focuses on redesigning the previous VAA system to remove fundamental flaws in how data related to scheduling academic plans is stored. A new system architecture is designed to allow backend optimizations, and cross-language support gives the VAA system the ability to communicate with Python for conducting machine learning research. The proposed work brings the VAA system closer to completion and ready for deployment in support of EvCC.

Sana Suse

Chair: Dr. Clark Olson
Candidate: Master of Science in Computer Science & Software Engineering

1:15 P.M.; Online
Project: Classifying Urban Regions in Satellite Imagery Using the Bag of Words Methodology 

Satellite imagery has become more accessible over the years in both availability and quality, though the analysis of such images has not kept pace. To investigate the analysis process, this work explores the detection of urban area boundaries in satellite imagery. The ground truth values of these boundaries were collected from the U.S. Census Bureau’s geospatial urban area dataset and were used to train a classification model using the Bag of Words methodology. During training and testing, 1000x1000-pixel patches were used for classification. The resulting classification accuracy was between 85-90% and showed that urban areas were classified with higher confidence than non-urban areas. Most of the sub-images classified with lower confidence lie in the transition zones between urban and non-urban areas. In addition to the low confidence in these transition areas, the patch sizes are quite large; for this reason, they are not helpful for delineating granular details in urban area boundaries.
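The Bag of Words pipeline can be sketched end to end: quantize local descriptors against a visual vocabulary, turn each patch into a word histogram, and label it by the nearest class prototype. The two-word vocabulary, the prototypes, and the descriptors below are toy placeholders, not values from this project.

```python
import math

# Two "visual words" (cluster centroids) standing in for a learned vocabulary.
VOCAB = [(0.0, 0.0), (1.0, 1.0)]

def histogram(descriptors):
    # Assign each descriptor to its nearest visual word, then normalize.
    counts = [0] * len(VOCAB)
    for d in descriptors:
        nearest = min(range(len(VOCAB)),
                      key=lambda i: math.dist(d, VOCAB[i]))
        counts[nearest] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Toy class prototypes: urban patches skew toward word 1, non-urban to word 0.
PROTOTYPES = {"urban": [0.2, 0.8], "non-urban": [0.9, 0.1]}

def classify(descriptors):
    h = histogram(descriptors)
    return min(PROTOTYPES, key=lambda c: math.dist(h, PROTOTYPES[c]))

patch = [(0.9, 1.1), (1.0, 0.8), (0.1, 0.2)]   # descriptors mostly near word 1
print(classify(patch))   # urban
```

A real pipeline learns the vocabulary by clustering many descriptors and trains a proper classifier on the histograms; the quantize-histogram-classify structure is the same.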


Tianhui Nie

Chair: Dr. Munehiro Fukuda
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Project: Visualization of 2D Continuous Spaces and Trees in MASS Java

MASS is an Agent-Based Modeling (ABM) library. It supports parallelized simulation over a distributed computing cluster. The Place objects in these simulations can be thought of as the environment where agents interact with each other. Places can mimic different data structures to simulate various interaction environments, such as graphs, multi-dimensional arrays, trees, and continuous spaces.

However, continuous spaces and trees are usually complex for programmers to debug and verify, so this project focuses on visualizing these data structures. The data structures are available in the MASS library and can be instantiated with InMASS, which enables Java’s JShell interface to execute code line by line in an interactive fashion. InMASS also provides additional functionality, including checkpoint and rollback, which helps programmers inspect their simulations. MASS allows places and agents to be transferred to Cytoscape for visualization. Cytoscape is an open-source network visualization tool initially developed to analyze biomolecular interaction networks. Expanded Cytoscape MASS plugins build a MASS control panel in the Cytoscape application, helping users visualize graphs, continuous spaces, and trees in Cytoscape.

This project successfully realized the visualization of MASS binary trees, quadtrees, and 2D continuous spaces with Cytoscape. It also enhanced the MASS-Cytoscape integration and optimized the MASS control panel. The resulting data structure visualizations provide an easier way for other users to learn the MASS library and debug their code.


Friday, May 27

Maré Sieling

Chair: Dr. Munehiro Fukuda
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; Online
Project: Agent-Based Database with GIS

Geographic Information Systems (GIS) create, manage, analyse, and map data. These systems are used to find relationships and patterns between pieces of data separated by long geographic distances. GIS data can be extremely large, and analysing the data can be laborious while consuming a substantial amount of resources. By distributing the data and processing it in parallel, the system consumes fewer resources and improves performance.

The Multi-Agent Spatial Simulation (MASS) library applies agent-based modelling to big data analysis over distributed computing nodes through parallelisation. GeoTools is a GIS system that is installed on a single node and processes data on that node. Creating a distributed GIS from GeoTools with the MASS library produces results faster and more effectively than a traditional single-node GIS.

This paper discusses the efficacy of coupling GIS and MASS through agents that render fragments of feature data as layers on places, returning the fragments to be combined into a completed image. It also discusses distributing and querying the data, returning results from queries written in a query language (CQL). Image quality is retained when panning and zooming, without major loss of performance, by re-rendering visible sections of the map through agents and parallelisation. Results show that coupling GIS and MASS significantly improves the efficiency and scalability of a GIS system.


Liwen Fan

Chair: Dr. Kelvin Sung
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Project: Realistic Fluid Rendering in Real-Time

Real-time realistic fluid rendering is important because fluid is ubiquitous and can be found in many Computer Generated Imagery (CGI) applications, such as video games and movies. However, realism in fluid rendering can be complex due to the fact that fluid does not have a concrete physical form or shape. There are many existing solutions in modeling the movement and the appearance of fluid. The movement of fluid focuses on simulating motions such as waves, ripples, and dripping. The appearance, or rendering, of fluid aims to reproduce the physical illumination process to include effects including reflection, refraction, and highlights. Since these solutions focus on addressing different aspects of modeling fluid, it is important to clearly understand application requirements when choosing among these.

This project focuses on the appearance, or rendering, of fluid. We analyze existing solutions in detail and adopt the one most suitable for real-time realistic rendering. With a solution selected, we explore implementation options based on modern graphics hardware. More specifically, we focus on graphics hardware that can be programmed through popular interactive graphical applications, for reasons of interactive modeling support, high-level shading languages, and fast debugging turnaround. The solution proposed by van der Laan et al. in their 2009 I3D research article is our choice for this project. Our analysis shows that their approach is the most suitable because of its real-time performance, high-quality rendered results, and, very importantly, the implementation details it provides.

The graphics system and hardware evaluation led to the Unity game engine. This is our choice of implementation platform due to its friendly interactive 3D functionality, high-level shading language support, and support for efficient development cycles. In particular, the decision is based on Unity’s Scriptable Render Pipeline (SRP) functionality, where the details of the image generation process can be highly customized. The SRP offers flexibility with ease of customizing shaders and control over the number of passes used to process the scene geometry for each generated image. In our implementation, the SRP is configured to compute the values of all the parameters in the fluid model via separate rendering passes.

Our implementation is capable of rendering fluid realistically in real time, where users have control over the actual fluid appearance. The delivered system supports two types of simple fluid motion: waves and ripples. The rendered fluid successfully captures effects from the intrinsic color of the fluid under Fresnel reflection, the reflection of environmental elements, and highlights from light sources. In addition, to give users full control over the rendered results, a friendly interface is provided. To demonstrate the system, we have configured it to showcase fluid rendering under some common conditions, including a swimming pool, a muddy pond, a green algae creek, and colored fluid in a flowery environment.


Wednesday, June 1

Yilin Cai

Chair: Dr. Brent Lagesse
Candidate: Master of Science in Computer Science & Software Engineering

11:00 A.M.; Online
Project: Model Extraction Attacks Effectiveness And Defenses

Machine learning is developing quickly in the data industry, and many technology companies with the resources to collect huge datasets and train models are starting to offer pre-trained models as paid services. The cost of training a good model for business use is high because huge training datasets may not be easily accessible and training the model itself requires a lot of time and effort. The increased value of a pre-trained model motivates attackers to conduct model extraction attacks, which focus on extracting valuable information from the target model, or on constructing a close clone of it for free use, solely by making queries to the victim. The goal of this experiment is to explore the vulnerabilities exploited by proposed model extraction attacks and to evaluate their effectiveness by comparing attack results as the victim model and its target datasets grow more complex. We first construct datasets for the attacks by making queries to the victim model; some attacks propose specific strategies for selecting queries. Then, we execute each attack either by running it from scratch or by using an existing test framework. We run the attacks with different victim models and datasets and compare the results. The results show that attacks which extract information from a model are effective on simpler models but not on more complex ones, and that the difficulty of making a cheap clone model increases as the victim model and its target datasets become more complex, with the attacker possibly needing more knowledge beyond query information. Potential defenses and their weaknesses are also discussed.
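The attack loop (query the victim, record labels, fit a clone) can be sketched with a toy one-dimensional "victim"; nothing here reflects a real service or the specific attacks this project evaluates.

```python
# Model extraction in miniature: the attacker only calls the victim's
# predict API, records (query, label) pairs, and fits a clone.
def victim_predict(x):
    return 1 if x >= 0.35 else 0          # secret decision boundary

queries = [i / 20 for i in range(21)]     # attacker-chosen probe points
labels = [victim_predict(q) for q in queries]

# Fit the clone: place its threshold midway between the highest query
# labelled 0 and the lowest query labelled 1.
lo = max(q for q, y in zip(queries, labels) if y == 0)
hi = min(q for q, y in zip(queries, labels) if y == 1)
clone_threshold = (lo + hi) / 2

clone = lambda x: 1 if x >= clone_threshold else 0
agreement = sum(clone(q) == victim_predict(q) for q in queries) / len(queries)
print(agreement)   # perfect agreement on the probed points
```

With a complex victim, the boundary has far more degrees of freedom, so the attacker needs many more (and better-chosen) queries, which is the trend the experiment observes.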

Back to top

Rochelle Palting

Chair: Dr. Geethapriya Thamilarasu
Candidate: Master of Science in Cybersecurity Engineering

1:15 P.M.; Online
Project: A Methodology for Testing Intrusion Detection Systems for Advanced Persistent Threat Attacks

Advanced Persistent Threats (APTs) are well-resourced, highly skilled, adaptive, malicious actors who pose a major threat to the security of an organization's critical infrastructure and sensitive data. An Intrusion Detection System (IDS) is one type of mechanism used to detect attacks. Testing with a current and realistic intrusion dataset, promptly detecting and correlating malicious behavior at various attack stages, and utilizing relevant metrics are critical to effectively testing an IDS for APT attack detection. Testing with outdated and unrealistic data would yield results unrepresentative of the IDS's ability to detect real-world APT attacks. In this project, we present a testing methodology with a recommended procedure for preparing the intrusion dataset and recommended evaluation metrics. Our methodology incorporates a software program we developed that dynamically retrieves real-world intrusion examples compiled in the MITRE ATT&CK knowledge base, presents the list of known APT tactics and techniques for the user to select into a scenario, and exports the attack scenario to an output file consisting of the selected APT tactics and techniques. Our testing methodology, along with the attack scenario generator, gives IDS testers guidance in testing with a current and realistic dataset and provides additional evaluation data points for improving the IDS under test. Benefits to IDS testers include time saved in dataset preparation and improved reliability in evaluating their IDS's APT detection.

Back to top

Pratik Goswami

Chair: Dr. William Erdly
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Project: Virtual Reality based Orthoptics for Binocular Vision Disorders

The Centers for Disease Control and Prevention notes that approximately 6.8% of children under the age of 18 in the United States are diagnosed with vision problems significant enough to impact learning. Binocular disorders can lead to headaches, blurry vision, double vision, loss of coordination, fatigue, and an inability to track objects, severely impacting a child's ability to learn. Without intervention, vision problems can lead to suppression of vision in the affected eye. Vision therapy, or orthoptics, is meant to help individuals recover their eyesight; it aims to retrain the user to achieve binocular fusion through therapeutic exercises. Binocular fusion refers to the phenomenon of perceiving a single fused image when each eye is presented with its own image. Virtual reality (VR) shows great potential as an orthoptics medium: VR headsets can isolate the user from the physical world, reduce real-world distractions, provide a dichoptic display in which each eye is presented with a different input, and deliver a customized therapy experience.

Although several VR applications exist with a focus on orthoptics, clinicians report that these applications fail to strike a balance between therapy and entertainment. These applications can be too entertaining for the user and thus distract them from the therapy goals.

As part of the EYE Research Group, I have developed two applications which, together with previously developed applications, form a VR toolkit for providing vision therapy to individuals diagnosed with binocular disorders. Each application in the toolkit focuses on one level of binocular fusion; the two applications I developed focus on the third and fourth levels, sensory fusion and motor fusion. The project was developed using the Unity game engine with the Oculus VR plugin. All decisions about controls and features were made after analyzing feedback from and interviews with the therapists at the EYE See Clinic. Key design decisions also resulted from demonstrations and trials of the prototypes at the ACTION Forum 2021, which was attended by therapists, students, and researchers in the field of orthoptics.

Although the applications have been successfully developed and approved by the therapists at the EYE See Clinic, a clinical study is required to test their usability and effectiveness as therapy tools. As of May 16, 2022, all applications have been successfully developed, tested, and approved by Dr. Alan Pearson, clinical advisor to the EYE Research Group. A case study was proposed, reviewed, and approved by the UW IRB and the UW Human Subjects Division (HSD). The results of the study will benefit future research.

Back to top

Franz Anthony Varela

Chair: Dr. Michael Stiber
Candidate: Master of Science in Computer Science & Software Engineering

5:45 P.M.; Online
Thesis: The Effects of Hybrid Neural Networks on Meta-Learning Objectives

Historically, models do not generalize well when trained solely on a single dataset or task objective, despite the plethora of data and computing available in the modern digital era. We propose that this is at least partially because the model's representations are inflexible when learned in this setting. In this thesis, we experiment with a hybrid neural network architecture that has an unsupervised model at its head (the Knowledge Representation module) and a supervised model at its tail (the Task Inference module), with the idea that a reusable knowledge base can supplement the learning of a set of related tasks. We analyze the two-part model in the contexts of transfer learning, few-shot learning, and curriculum learning, training on the MNIST and SVHN datasets. The experiments demonstrate that our architecture on average achieves test accuracy similar to the end-to-end baselines, and marginally better in certain experiments depending on the subnetwork combination.

Back to top

Thursday, June 2

Christopher Coy

Chair: Dr. Geethapriya Thamilarasu
Candidate: Master of Science in Cybersecurity Engineering

1:15 P.M.; Online
Project: Multi-platform User Activity Digital Forensics Intelligence Collection

In today’s interconnected world, computing devices are employed for all manner of professional and personal activity, from implementing business processes and email communications to online shopping and web browsing.  While most of this activity is legitimate, there are user actions that violate corporate policy or constitute criminal activity, such as clicking a link in a phishing email or downloading child sexual abuse material.

When a user is suspected of violating policies or law, a digital forensic analyst is typically brought in to investigate the traces of user activity on a system in an effort to confirm or refute the suspected activity.

Digital forensics analysts need the capability to quickly and easily collect and process key user activity artifacts that enable rapid analysis and swift decision making. The FORINT project was developed to provide digital forensics analysts with this capability across multiple operating systems.

Nhut Phan

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering

3:30 P.M.; Online
Thesis: Deep Learning Methods to Identify Intracranial Hemorrhage Using Tissue Pulsatility Ultrasound Imaging

Traumatic brain injury (TBI) is a serious medical condition in which a person experiences head trauma, resulting in intracranial hemorrhage (bleeding) and potential deformation of anatomical structures within the head. Detecting these abnormalities early is key to saving lives and improving survival outcomes. The standard methods for detecting intracranial hemorrhage are computed tomography (CT) and magnetic resonance imaging (MRI); however, these are not readily available on the battlefield or in low-income settings. A team of researchers at the University of Washington developed a novel ultrasound signal processing technique called Tissue Pulsatility Imaging (TPI) that operates on raw ultrasound data collected with a hand-held, tablet-like ultrasound device. This research aims to build segmentation deep-learning models that take TPI data as input and detect the skull, ventricles, and intracranial hemorrhage in a patient's head. We employed the U-Net architecture and four of its variants for this purpose. Results show that the proposed methods can segment the brain-enclosing skull and are relatively successful at ventricle detection, while more work is needed to produce a model that can reliably segment intracranial hemorrhage.

Back to top

Friday, June 3

Monali Khobragade

Chair: Dr. Min Chen
Candidate: Master of Science in Computer Science & Software Engineering

8:45 A.M.; Online
Project: EcoTrip Planner – An Android App

The emergence of online travel websites like TripAdvisor, Priceline, Expedia, and KAYAK allows users to book accommodations online without the hassle of working through an agent. Users no longer wait in queues for flight tickets to their favorite destinations, and they can learn about a vacation destination from these websites, where previously they depended solely on an agent's guidance. Users can book flights, hotels, and restaurants online and plan a vacation trip after manually evaluating options such as price, flight times and availability, hotel location, food options, and nearby places to check out. However, a recent study indicates that the abundance of options available through online travel agencies overwhelms users. The main challenge is that these websites do not provide a holistic trip plan, including flight and hotel accommodation, within the user's budget. In this project, we provide a trip plan with flight and hotel suggestions within the user's given budget by using personalized factors and analyzing user experience. The aim of this project is to develop an Android mobile application that helps users plan trips under a given budget and fight information overload. The application asks users for their vacation destination and the budget they can afford, as well as their preferred hotel location, star rating, and reviews. It then analyzes the budget and uses heuristic models and natural language processing to recommend the best available travel and lodging. For travel, it suggests a round-trip plan from the user's current location to the destination; for hotels, it suggests the top three with a personalized user experience. The system also extracts the top five keywords from each hotel's reviews, giving users an overall idea of the hotel. This Android application will help users plan a trip, including flight travel and hotel accommodation, in minutes.

Back to top

Sarika Ramesh Bharambe

Chair: Dr. Brent Lagesse
Candidate: Master of Science in Cybersecurity Engineering

11:00 A.M.; Online
Project: New Approach towards Self-destruction of Data in Cloud

One of the most pressing issues faced by the cloud service industry is ensuring data privacy and security. Handling data in a cloud environment that leverages shared resources, while offering reliable and secure cloud services, requires a strong encryption solution with little or no performance impact. One approach to this issue is data self-destruction, which primarily aims to protect shared data. Encrypting files is a simple way to protect personal or commercial data. Using a hybrid RSA-AES algorithm, we propose a time-based self-destruction method that addresses these difficulties and improves file encryption performance and security through file-splitting functionality. Each data owner sets an expiration limit on shared content, which takes effect once the file is uploaded to the cloud. After the user-specified expiration period has passed, the sensitive information securely self-destructs.

In this approach, we introduce the use of cloud channels, which help increase data security: we split the bits of each word and upload them in encrypted form. For this purpose, we use ThingSpeak, a cloud platform for visualizing, analyzing, and sharing data through public and private channels. We experimentally measure the performance overhead of our approach on ThingSpeak and use realistic tests to demonstrate its viability for enhancing the security of cloud-based data storage. For encryption and decryption, we use the hybrid RSA-AES algorithm. The results of our experiments show that this algorithm offers higher efficiency, increased accuracy, better performance, and security benefits.

William Otto Thomas

Chair: Dr. Erika Parsons
Candidate: Master of Science in Computer Science & Software Engineering

1:15 P.M.; Online
Thesis: Human Cranium, Brain Ventricle and Blood Detection Using Machine Learning on Ultrasound Data 

Any head-related injury can be very serious and may be classified as a traumatic brain injury (TBI), which can result from intracranial hemorrhaging. TBI is one of the most common injuries on or around the battlefield and can be caused by both direct and indirect impacts. While assessing a brain injury in a well-equipped hospital is typically straightforward, the same cannot be said of a TBI assessment outside a hospital environment. Typically, a computed tomography (CT) machine is used to diagnose TBI; this project instead demonstrates how ultrasound can be used to predict where the skull, ventricles, and bleeding occur. The Pulsatility Research Group at the University of Washington has conducted three years of data collection and research to create a procedure for diagnosing TBI in a field situation. In this thesis, machine learning methodologies are used to predict these CT-derived features. The results of this research show that, with adequate data and collection methods, the skull, ventricles, and potentially blood can be detected by applying machine learning to ultrasound data.

Back to top

Questions: Please email cssgrad@uw.edu