Session E: Harnessing Emerging Technologies to Shape the Future
3rd Annual Graduate Conference for Research, Scholarship, and Creative Activity (contact: grad@gsu.edu)
Oral Presentation | Student Center East, Room 203
Feb 05, 2025, 02:15 PM - 03:00 PM (America/New_York)
College of Arts & Sciences | J. Mack Robinson College of Business
Automated Service Encounter: How Robotic Agents Impact Customer Online Reviews
Robotic service agents are becoming increasingly common in service-oriented industries such as hospitality (e.g., hotels and restaurants) and healthcare (e.g., hospitals), offering companies solutions for automating repetitive tasks, overcoming labor shortages, and reducing labor costs. Yet, such robotic agents’ impact on customer perceptions and feedback remains underexplored. Existing literature suggests that service delivery robots yield both positive and negative outcomes. Through three randomized lab and field experiments, we empirically investigate whether and how interactions with robotic versus human service agents influence consumers’ complaint and self-disclosure behavior, uncover the underlying mechanism, and identify the boundary conditions. Our study contributes to the service automation literature by identifying mechanisms that shape customer interactions and reactions to robotic service agents. Additionally, we highlight practical implications for deploying robotic agents efficiently, providing insights for managers to enhance customer experience and reduce negative online feedback in service operations.
Presenter: Anqi Zhang, Robinson College of Business
Co-Author: Xinyu Fu
Improving Object Detection Efficiency Using Neuromorphic Visible Light Communication
This work improves object detection and tracking efficiency by combining neuromorphic computing and Visible Light Communication (VLC) with the Metavision SDK. The study leverages the SDK's Python-based inference pipeline, which uses a pre-trained TorchScript model to detect and track vehicles and pedestrians. The system processes event-based data, outputs bounding boxes, and reports a confidence level for each detected object. Dual visualization panes display detection and tracking results, showcasing the system's potential for low-power, high-speed performance. The setup uses event-based cameras or pre-recorded RAW/DAT files, along with pipeline functionality such as geometric preprocessing, noise filtering, and data association. Parameters such as inference thresholds, RoI filtering, and Non-Max Suppression (NMS-IoU) can be adjusted to optimize detection and tracking performance. Performance is influenced by factors such as lighting conditions, camera placement, and lens focus. While the Python implementation is efficient, a faster C++ alternative is also available. This integration of VLC and neuromorphic processing demonstrates the feasibility of real-time, energy-efficient object detection, with potential applications in smart cities, IoT systems, and autonomous technologies. The study achieves results comparable to traditional methods while reducing energy consumption and improving efficiency.
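The tunable NMS-IoU step mentioned in the abstract is a standard post-processing filter on detector output. A minimal, generic Python sketch (hypothetical boxes and scores; this is not the Metavision SDK API) might look like:

```python
def iou(a, b):
    # Intersection-over-union for boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedily keep the highest-scoring box, then discard any
    # remaining box whose overlap with it exceeds the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

Lowering `iou_thresh` suppresses more overlapping detections (fewer duplicates, more risk of merging nearby objects), which is the kind of trade-off the abstract describes tuning against lighting and camera placement.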
Presenter: Nikhil Shrangare, Department of Computer Science, College of Arts and Sciences
Co-Author: Abbaas Alif Mohamed Nishar
Invisible Data Embedding for Reliable Screen-Camera Communication in Real-World Settings
We introduce Revelio, a screen-to-camera communication system that embeds data in real-world settings by leveraging temporal flicker fusion within the OKLAB color space. Through spatially adaptive flickering and encoding information in distinct pixel-region shapes, Revelio achieves data embedding that is visually imperceptible yet highly resilient to the noise, asynchronicity, and distortions common in screen-camera channels, allowing reliable decoding by standard smartphone cameras. Powered by a two-stage neural network, our decoding process employs a weighted differential accumulator to improve frame detection and symbol recognition accuracy. Early experiments showcase the system's potential for interactive television, providing a seamless method for transmitting meta-information without interrupting the viewing experience.
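Embedding in OKLAB works because small perturbations there track perceived (rather than numeric) color differences. A minimal sRGB-to-OKLAB conversion sketch, using the matrices from Björn Ottosson's published OKLAB definition (this illustrates the color space only, not Revelio's embedding code):

```python
def srgb_to_linear(c):
    # Undo the sRGB transfer curve (c in [0, 1]).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_srgb_to_oklab(r, g, b):
    # Linear sRGB -> approximate LMS cone response.
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # Cube-root nonlinearity, then project to (L, a, b).
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    return (
        0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,  # L: lightness
        1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,  # a: green-red
        0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,  # b: blue-yellow
    )
```

A flicker-fusion embedder in the spirit of the abstract would alternate a pixel region between OKLAB values offset by a tiny ±delta in L at the display's refresh rate: the eye fuses the frames to the mean color, while a camera sampling individual frames can recover the sign of the delta.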
Presenter affiliations: Robinson College of Business; Department of Computer Science, College of Arts and Sciences