Item: Improving Object Detection using Enhanced EfficientNet Architecture (2023-08) Kamel Ibrahim, Michael; El-Sharkawy, Mohamed; King, Brian; Rizkalla, Maher

EfficientNet is designed to achieve top accuracy while using fewer parameters and less computational resources than previous models. In this paper, we present a compound scaling method that re-weights the network's width (w), depth (d), and resolution (r) by adjusting the model's hyperparameters, which leads to better performance than traditional methods that scale only one or two of these dimensions. Additionally, we present an enhanced EfficientNet backbone architecture. We show that EfficientNet achieves top accuracy on the ImageNet dataset while being up to 8.4x smaller and up to 6.1x faster than previous top-performing models. The effectiveness of EfficientNet is also demonstrated on transfer learning and object detection tasks, where it achieves higher accuracy with fewer parameters and less computation. The proposed enhanced architecture is discussed in detail and compared with the original architecture. Our approach provides a scalable and efficient solution for both academic research and practical applications, where resource constraints are often a limiting factor.

Item: Deep Reinforcement Learning of IoT System Dynamics for Optimal Orchestration and Boosted Efficiency (2023-08) Shi, Haowei; Zhang, Qingxue; King, Brian; Fang, Shiaofen

This thesis targets the orchestration challenge of wearable Internet of Things (IoT) systems: finding optimal system configurations in terms of energy efficiency, computing, and data transmission activities. We first investigated reinforcement learning on simulated IoT environments to demonstrate its effectiveness, and afterwards studied the algorithm on real-world wearable motion data to show its practical promise.
More specifically, a challenge first arises in complex massive-device orchestration: it is essential to configure and manage both the massive devices and the gateway/server. On the wearable IoT device side, the complexity lies in the diverse energy budgets, computing efficiencies, etc.; on the phone or server side, it lies in how the global diversity can be analyzed and how the system configuration can be optimized. We therefore propose a new reinforcement learning architecture, called boosted deep deterministic policy gradient, with enhanced actor-critic co-learning and multi-view state transformation. The proposed actor-critic co-learning allows for enhanced dynamics abstraction through a shared neural network component. Evaluated on a simulated massive-device task, the proposed deep reinforcement learning framework achieved much more efficient system configurations, with enhanced computing capabilities and improved energy efficiency. Secondly, we leveraged real-world motion data to demonstrate the potential of reinforcement learning for optimally configuring motion sensors. We used paradigms in sequential data estimation to obtain estimated data for some sensors, allowing energy savings since these sensors no longer need to be activated to collect data during estimation intervals. We then introduced the Deep Deterministic Policy Gradient algorithm to learn to control the estimation timing. This study provides a real-world demonstration of maximizing the energy efficiency of wearable IoT applications while maintaining data accuracy. Overall, this thesis greatly advances wearable IoT system orchestration for optimal system configurations.
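To make the deep deterministic policy gradient (DDPG) idea above concrete, the following is a minimal single-update sketch in PyTorch. The state/action sizes, network widths, and toy data are illustrative assumptions; target networks, replay buffers, and the thesis's boosted co-learning and multi-view state transformation are deliberately omitted.

```python
# Minimal DDPG update-step sketch for a hypothetical continuous-control task
# (e.g. choosing a sensor configuration vector from a system-state vector).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2  # assumed toy sizes, not from the thesis

actor = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                      nn.Linear(32, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 32), nn.ReLU(),
                       nn.Linear(32, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(state, action, reward, next_state, gamma=0.99):
    # Critic: regress Q(s, a) toward the bootstrapped target r + gamma * Q(s', pi(s')).
    with torch.no_grad():
        target = reward + gamma * critic(
            torch.cat([next_state, actor(next_state)], dim=-1))
    q = critic(torch.cat([state, action], dim=-1))
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of Q(s, pi(s)) (deterministic policy).
    actor_loss = -critic(torch.cat([state, actor(state)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    return critic_loss.item(), actor_loss.item()

# One update on a random batch of transitions.
s = torch.randn(16, STATE_DIM)
a = torch.randn(16, ACTION_DIM)
r = torch.randn(16, 1)
s2 = torch.randn(16, STATE_DIM)
c_loss, a_loss = ddpg_step(s, a, r, s2)
```

In a full implementation, slowly updated target copies of both networks stabilize the bootstrapped target.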
Item: Deep Brain Dynamics and Images Mining for Tumor Detection and Precision Medicine (2023-08) Ramesh, Lakshmi; Zhang, Qingxue; King, Brian; Chen, Yaobin

Automatic brain tumor segmentation in Magnetic Resonance Imaging (MRI) scans is essential for the diagnosis, treatment, and surgery of cancerous tumors. However, identifying hard-to-detect tumors, which are usually of different sizes, irregular shapes, and vague invasion areas, poses a considerable challenge. Current advancements have not yet fully leveraged the dynamics in the multiple modalities of MRI, since they usually treat multi-modality as multi-channel, and the early channel merging may not fully reveal inter-modal couplings and complementary patterns. In this thesis, we propose a novel deep cross-attention learning algorithm that maximizes the mining of subtle dynamics from each of the input modalities and then boosts the feature fusion capability. More specifically, we have designed a Multimodal Cross-Attention Module (MM-CAM), equipped with a 3D Multimodal Feature Rectification and Feature Fusion Module. Extensive experiments have shown that the proposed deep learning architecture, empowered by the innovative MM-CAM, produces higher-quality segmentation masks of the tumor subregions. Further, we have enhanced the algorithm with image matting refinement techniques. We propose to integrate a Progressive Refinement Module (PRM) and perform Cross-Subregion Refinement (CSR) for the precise identification of tumor boundaries. A Multiscale Dice Loss was also successfully employed to enforce additional supervision for the auxiliary segmentation outputs. This enhancement will facilitate effective matting-based refinement for medical image segmentation applications. Overall, this thesis, with deep learning, transformer-empowered pattern mining, and sophisticated architecture designs, will greatly advance deep brain dynamics and image mining for tumor detection and precision medicine.
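The cross-attention idea behind a module like the MM-CAM can be sketched as follows: one modality's features form the queries while another modality supplies keys and values, so each position attends across modalities before fusion. The single-head design, shapes, and modality names below are illustrative assumptions, not the thesis's exact architecture.

```python
# Hedged sketch of cross-modal attention between two MRI modality feature maps
# (flattened to token sequences), in the spirit of cross-attention fusion.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)        # queries from modality A (e.g. T1)
        self.kv = nn.Linear(dim, 2 * dim)   # keys/values from modality B (e.g. FLAIR)
        self.scale = dim ** -0.5

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, tokens, dim) flattened 3D feature maps
        q = self.q(feat_a)
        k, v = self.kv(feat_b).chunk(2, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Each position in modality A attends over modality B; a residual
        # connection keeps the original modality-A signal.
        return feat_a + attn @ v

fused = CrossModalAttention(32)(torch.randn(2, 64, 32), torch.randn(2, 64, 32))
```

In contrast to early channel merging, this keeps each modality's features separate until the attention step, letting inter-modal couplings be learned explicitly.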
Item: Non-intrusive Wireless Sensing with Machine Learning (2023-08) Xie, Yucheng; Li, Lingxi; Li, Feng; Guo, Xiaonan; King, Brian

This dissertation explores non-intrusive wireless sensing for diet and fitness activity monitoring, in addition to assessing security risks in human activity recognition (HAR). It delves into the use of WiFi and millimeter wave (mmWave) signals for monitoring eating behaviors, discerning intricate eating activities, and observing fitness movements. The proposed systems harness variations in wireless signal propagation to record human behavior while providing exhaustive details on dietary and exercise habits. Significant contributions encompass unsupervised learning methodologies for detecting dietary and fitness activities, implementing soft-decision and deep neural networks for assorted activity recognition, constructing tiny-motion mechanisms for recovering subtle mouth muscle movements, employing space-time-velocity features for multi-person tracking, and utilizing generative adversarial networks and domain adaptation structures to enable less cumbersome training efforts and cross-domain deployments. A series of comprehensive tests validates the efficacy and precision of the proposed non-intrusive wireless sensing systems. Additionally, the dissertation probes the security vulnerabilities in mmWave-based HAR systems and puts forth various sophisticated adversarial attacks: targeted, untargeted, universal, and black-box. It designs adversarial perturbations aiming to deceive the HAR models while striving to minimize detectability. The research offers powerful insights into issues and efficient solutions related to non-intrusive sensing tasks and the security challenges linked with wireless sensing technologies.
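As a point of reference for the adversarial perturbations mentioned above, here is a minimal untargeted attack in the fast-gradient-sign style applied to a stand-in HAR classifier over mmWave-like feature frames. The model, input shapes, and epsilon bound are illustrative assumptions; the dissertation's actual targeted, universal, and black-box attacks are more sophisticated than this sketch.

```python
# Hedged sketch: untargeted FGSM-style perturbation against a toy activity
# classifier. The epsilon bound keeps the perturbation small (low detectability).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 8, 5))  # 5 assumed activity classes

def untargeted_fgsm(x, label, epsilon=0.05):
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the classification loss, bounded
    # element-wise by epsilon so the change to the signal stays subtle.
    return (x + epsilon * x.grad.sign()).detach()

frames = torch.randn(4, 16, 8)       # batch of signal feature frames (assumed shape)
labels = torch.tensor([0, 1, 2, 3])  # true activity labels
adv = untargeted_fgsm(frames, labels)
```

A targeted variant would instead descend the loss toward a chosen wrong label, and a black-box attack would estimate gradients without access to the model.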
Item: VR-Based Testing Bed for Pedestrian Behavior Prediction Algorithms (2023-08) Armin, Faria; Tian, Renran; Chen, Yaobin; Li, Lingxi

With the introduction of semi- and fully automated vehicles on the road, drivers will become reluctant to focus on traffic interactions and will rely on the vehicles' decision-making. However, encountering pedestrians still poses a significant difficulty for modern automated driving technologies. Considering the high complexity of human behavior modeling for solving a real-world problem, deep learning algorithms trained on naturalistic data have become promising solutions. Nevertheless, although developing such algorithms is achievable based on scene data collection and driver knowledge extraction, evaluation remains challenging due to the potential crash risks and limitations in acquiring ground-truth intention changes. This study proposes a VR-based testing bed to evaluate real-time pedestrian intention algorithms, as VR simulators are recognized for their affordability and adaptability in producing a variety of traffic situations, and they offer a more reliable way to conduct human-factors research on autonomous cars. The pedestrian wears a head-mounted headset or uses keyboard input and makes decisions in accordance with the circumstances. The simulator provides a credible and robust experience, essential for exhibiting the real-time behavior of the pedestrian. While crossing the road, there is uncertainty associated with pedestrian intention; our simulator anticipates the crossing intention while accounting for the ambiguity of pedestrian behavior. A case study was performed with multiple subjects in several crossing conditions based on day-to-day activities. It can be inferred from the study outcomes that pedestrian intention can be precisely inferred using this VR-based simulator.
However, depending on the speed of the car and the distance between the vehicle and the pedestrian, the accuracy of the prediction can differ considerably in some cases.

Item: Critical Zone Calculation For Automated Vehicles Using Model Predictive Control (2022-05) Glasky, Enimini Theresa; Li, Lingxi; Chen, Yaobin; King, Brian

This thesis studies critical zones of automated vehicles. The goal is for the automated vehicle to complete a car-following or lane-change maneuver without collision. For instance, the automated vehicle should be able to indicate its interest in changing lanes and plan how the maneuver will occur using model predictive control theory, together with the autonomous vehicle toolbox in Matlab. A test bench (that includes a scenario creator, motion logic and planner, sensors, and radars) is created and used to calculate the parameters of a critical zone. After a trajectory has been planned, the automated vehicle attempts the car-following or lane change while constantly verifying that it is safe to continue on this path. If at any point the lead vehicle brakes or a trailing vehicle accelerates, the automated vehicle decides to either brake, accelerate, or abandon the lane change.

Item: Recommendation Systems in Social Networks (2023-05) Mohammad Jafari, Behafarid; King, Brian; Luo, Xiao; Jafari, Ali; Zhang, Qingxue

The dramatic improvement in information and communication technology (ICT) has driven an evolution in learning management systems (LMS). The rapid growth in LMSs has caused users to demand more advanced, automated, and intelligent services. CourseNetworking is a next-generation LMS that adopts machine learning to add personalization, gamification, and more dynamics to the system. This work develops two recommender systems that can help improve CourseNetworking services. The first is a social recommender system that helps CourseNetworking track user interests and give more relevant recommendations.
Recently, graph neural network (GNN) techniques have been employed in social recommender systems due to their high success in graph representation learning, including on social network graphs. Despite the rapid advances in recommender system performance, dealing with the dynamic property of social network data is one of the key challenges that remains to be addressed. In this research, a novel method is presented that provides social recommendations by incorporating the dynamic property of social network data in a heterogeneous graph, supplementing the graph with time-span nodes that are used to define users' long-term and short-term preferences over time. The second service proposed as an addition to Rumi services is a hashtag recommendation system that can help users label their posts quickly, resulting in improved searchability of content. In recent years, several hashtag recommendation methods have been proposed and developed to speed up the processing of texts and quickly find the critical phrases. These methods use different approaches and techniques to obtain critical information from a large amount of data. This work investigates the efficiency of unsupervised keyword extraction methods for hashtag recommendation and recommends the one with the best performance for use in a hashtag recommender system.

Item: Plant Level IIoT Based Energy Management Framework (2023-05) Koshy, Liya Elizabeth; Chien, Stanley Yung-Ping; Chen, Jie; King, Brian

The Energy Monitoring Framework, designed and developed by the IAC at IUPUI, aims to provide a cloud-based solution that combines business analytics with sensors for real-time energy management at the plant level, using wireless sensor network technology. The project provides a platform where users can analyze the functioning of a plant using sensor data. The data also helps users explore energy usage trends and identify any energy leaks due to malfunctions or other environmental factors in their plant.
Additionally, users can check the machinery status in their plant and control the equipment remotely. The main objectives of the project include the following:

• Set up a wireless network using sensors and smart implants with a base station/controller.
• Deploy and connect the smart implants and sensors with the equipment in the plant that needs to be analyzed or controlled, to improve its energy efficiency.
• Set up a generalized interface to collect and process the sensor data values and store the data in a database.
• Design and develop a generic database compatible with various companies, irrespective of their type and size.
• Design and develop a web application with a generalized structure, so that the database can be deployed at multiple companies with minimum customization. The web app should provide users with a platform to interact with the data, analyze the sensor data, and initiate commands to control the equipment.

The general structure of the project comprises the following components:

• A wireless sensor network with a base station.
• An edge PC that interfaces with the sensor network to collect the sensor data and send it out to the cloud server. The system also interfaces with the sensor network to send out command signals to control the switches/actuators.
• A cloud that hosts a database and an API to collect and store information.
• A web application hosted in the cloud to provide an interactive platform for users to analyze the data.

The project was demonstrated in:

• Lecture Hall (https://iac-lecture-hall.engr.iupui.edu/LectureHallFlask/).
• Test Bed (https://iac-testbed.engr.iupui.edu/testbedflask/).
• A company in Indiana.

The above examples used sensors such as current sensors, temperature sensors, carbon dioxide sensors, and pressure sensors to set up the sensor network. The equipment was controlled using compatible switch nodes with the chosen sensor network protocol.
The energy consumption details of each piece of equipment were measured over a few days. The data was validated, and the system worked as expected, helping the user monitor, analyze, and control the connected equipment remotely.

Item: Multi-spectral Fusion for Semantic Segmentation Networks (2023-05) Edwards, Justin; El-Sharkawy, Mohamed; King, Brian; Kim, Dongsoo

Semantic segmentation is a machine learning task that is seeing increased utilization in multiple fields, from medical imagery, to land demarcation, to autonomous vehicles. Semantic segmentation performs the pixel-wise classification of images, creating a new, segmented representation of the input that can be useful for detecting various terrain and objects within an image. Recently, convolutional neural networks have been heavily utilized in networks tackling the semantic segmentation task. This is particularly true in the field of autonomous driving systems. The requirements of automated driver assistance systems (ADAS) drive semantic segmentation models targeted for deployment on ADAS to be lightweight while maintaining accuracy. A commonly used method to increase accuracy in the autonomous vehicle field is to fuse multiple sensory modalities. This research focuses on leveraging the fusion of long-wave infrared (LWIR) imagery with visual-spectrum imagery to fill in the inherent performance gaps when using visual imagery alone. This comes with a host of benefits, such as increased performance in various lighting conditions and adverse environmental conditions. Utilizing this fusion technique is an effective method of increasing the accuracy of a semantic segmentation model. Being a lightweight architecture is key for successful deployment on ADAS, as these systems often have resource constraints and need to operate in real-time.
The Multi-Spectral Fusion Network (MFNet) meets these requirements by leveraging a sensory fusion approach, and as such was selected as the baseline architecture for this research. Many improvements were made upon the baseline architecture by leveraging a variety of techniques. Such improvements include the proposal of a novel loss function, categorical cross-entropy dice loss; the introduction of squeeze-and-excitation (SE) blocks; the addition of pyramid pooling; a new fusion technique; and drop-input data augmentation. These improvements culminated in the creation of the Fast Thermal Fusion Network (FTFNet). Further improvements were made by introducing depthwise separable convolutional layers, leading to lightweight FTFNet variants, FTFNet Lite 1 & 2. The FTFNet family was trained on the Multi-Spectral Road Scenarios (MSRS) and MIL-Coaxials visual/LWIR datasets. The proposed modifications lead to an improvement over the baseline in mean intersection over union (mIoU) of 2.92% and 2.03% for FTFNet and FTFNet Lite 2, respectively, when trained on the MSRS dataset. Additionally, when trained on the MIL-Coaxials dataset, the FTFNet family showed improvements in mIoU of 8.69%, 4.4%, and 5.0% for FTFNet, FTFNet Lite 1, and FTFNet Lite 2.

Item: Deep Image Processing with Spatial Adaptation and Boosted Efficiency & Supervision for Accurate Human Keypoint Detection and Movement Dynamics Tracking (2023-05) Dai, Chao Yang; Zhang, Qingxue; King, Brian S.; Fang, Shiaofen

This thesis aims to design and develop a spatial adaptation approach, through spatial transformers, to improve the accuracy of human keypoint recognition models. We have studied different model types and design choices to gain an accuracy increase over models without spatial transformers, and analyzed how spatial transformers increase the accuracy of predictions. A neural network called Widenet has been leveraged as a specialized network for providing the parameters to the spatial transformer.
Further, we have evaluated methods to reduce the model parameters, as well as strategies to enhance the learning supervision, to further improve the performance of the model. Our experiments and results have shown that the proposed deep learning framework can effectively detect human keypoints compared with the baseline methods. We have also reduced the model size without significantly impacting performance, and the enhanced supervision has improved performance. This study is expected to greatly advance the deep learning of human keypoints and movement dynamics.
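The spatial-transformer mechanism described in the last item above can be sketched as follows: a small localization network (standing in for a network like Widenet) predicts affine parameters, which are used to warp the input before keypoint prediction. The network sizes, 28x28 input, and identity-initialized head are illustrative assumptions, not the thesis's configuration.

```python
# Hedged sketch of a spatial transformer front-end: localize -> affine grid ->
# differentiable resampling of the input image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAdapter(nn.Module):
    def __init__(self):
        super().__init__()
        self.localize = nn.Sequential(nn.Flatten(),
                                      nn.Linear(1 * 28 * 28, 32), nn.ReLU(),
                                      nn.Linear(32, 6))
        # Initialize to the identity transform so training starts from "no warp".
        self.localize[-1].weight.data.zero_()
        self.localize[-1].bias.data.copy_(
            torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    def forward(self, x):
        theta = self.localize(x).view(-1, 2, 3)  # per-image 2x3 affine matrix
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        # grid_sample is differentiable, so the warp is learned end-to-end
        # with whatever keypoint head follows it.
        return F.grid_sample(x, grid, align_corners=False)

x = torch.randn(2, 1, 28, 28)
warped = SpatialAdapter()(x)  # identical to x at identity initialization
```

During training, gradients from the keypoint loss flow back through the sampler into the localization network, letting the model learn to spatially normalize its input.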