Automation & Interface Design Research
Exploring the intersection of automation levels in personal finance applications and user preferences for direct manipulation versus agentic AI interfaces.
Krämer: Automation in Finance
Research on levels of automation in personal budgeting and finance apps, with an interactive Figma prototype demonstration.
Lauterbach: Interface Preferences
Analysis of factors influencing user adoption of direct manipulation versus agentic AI interfaces.
Direct Manipulation–Automation Levels in Finance Apps
Author: Krämer
Topic: Taxonomy of Automation Levels in Personal Finance Applications
Research Question
This research investigates how combinations of user control and system knowledge types influence user autonomy, satisfaction, and financial outcomes in finance apps. The central question: What are the different direct manipulation–automation levels in finance apps currently on the market, and how can they be categorized?
Research Strategy
A systematic literature review was conducted using Google Scholar and Research Rabbit with three search terms: (1) direct manipulation vs interface agents in finance apps, (2) finance apps agents, and (3) finance apps direct manipulation. Papers were filtered through title and abstract screening, with related papers identified via citation analysis [1][2][3].
Five-Level Taxonomy
Level 0: Full Direct Manipulation
Visible controls, incremental and reversible actions, immediate feedback (sliders, filters, dynamic queries). High predictability; user remains the acting agent [2]. Example: Manual chart manipulation, point-and-click direct banking interfaces.
Level 1: Assistive Direct Manipulation
Primarily direct manipulation, augmented by visible non-executive agent functions: recommendations, highlighting, details-on-demand. Agents suggest but don't act autonomously [2][6]. Requires transparent, controllable UIs with good explainability.
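To make the Level 1 contract concrete, the following minimal Python sketch models a non-executive agent that can read data and return explained suggestions but exposes no way to act on the user's finances. The names (`Suggestion`, `BudgetSuggester`) and the budget-overrun rule are illustrative assumptions, not drawn from any cited system.

```python
from dataclasses import dataclass

# Hypothetical sketch of a Level 1 (assistive) agent: it inspects data and
# returns visible, explainable suggestions, but has no method that could
# change the user's finances on its own.

@dataclass
class Suggestion:
    message: str    # recommendation shown to the user
    rationale: str  # explanation, supporting transparency and explainability

class BudgetSuggester:
    """Non-executive agent: read-only access, suggestions only."""

    def review(self, monthly_spend: dict[str, float],
               budget: dict[str, float]) -> list[Suggestion]:
        suggestions = []
        for category, spent in monthly_spend.items():
            limit = budget.get(category)
            if limit is not None and spent > limit:
                suggestions.append(Suggestion(
                    message=f"Consider reducing {category} spending.",
                    rationale=f"Spent {spent:.2f} against a budget of {limit:.2f}.",
                ))
        return suggestions

if __name__ == "__main__":
    agent = BudgetSuggester()
    for s in agent.review({"dining": 320.0, "rent": 900.0},
                          {"dining": 250.0, "rent": 950.0}):
        print(s.message, "-", s.rationale)
```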
Level 2: Semi-Autonomous Agents
Agents propose actions and execute with user confirmation (recommend-and-execute-with-confirmation). Adaptive but always with user-in-the-loop control [2][5]. Example: Robo-advisors that suggest portfolios and trade only after user approval.
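A minimal sketch of the Level 2 recommend-and-execute-with-confirmation loop, assuming a hypothetical propose/execute split; the defining property is that execution is only reachable through an explicit user-approval step.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Level 2 pattern: the agent proposes a concrete
# action, and execution is gated on explicit user confirmation (user-in-the-loop).

@dataclass
class ProposedTrade:
    symbol: str
    amount: float
    reason: str  # shown to the user before they decide

def propose_rebalance() -> ProposedTrade:
    # Stand-in for a robo-advisor recommendation engine.
    return ProposedTrade("VTI", 500.0, "Equity allocation 5% below target.")

def execute(trade: ProposedTrade) -> None:
    print(f"Executing: buy {trade.amount:.2f} of {trade.symbol}")

def confirm_and_execute(trade: ProposedTrade, user_approves) -> bool:
    """Execution is only reachable through an explicit approval callback."""
    print(f"Proposal: {trade.reason} -> buy {trade.amount:.2f} {trade.symbol}")
    if user_approves(trade):
        execute(trade)
        return True
    print("Proposal declined; no action taken.")
    return False

if __name__ == "__main__":
    # Here approval is a console prompt; in an app it would be a dialog.
    confirm_and_execute(propose_rebalance(),
                        lambda t: input("Approve? [y/N] ").strip().lower() == "y")
```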
Level 3: Autonomous Executive Agents
Agents act independently within defined policies/constraints (automatic rebalancing, simple trades). Higher invisibility possible but requires monitoring and override capabilities [3][6]. Needs strict risk alignment, real-time monitoring, and regulatory compliance.
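The Level 3 requirements (acting within policy, real-time monitoring, override) can be sketched as a guard layer around execution. The `Policy` fields, `kill_switch` flag, and audit log below are illustrative assumptions, not a specific product's design.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Level 3 guard: the agent may act without
# per-action confirmation, but every action is checked against an explicit
# policy, logged for monitoring, and can be halted by a user override.

@dataclass
class Policy:
    max_trade_amount: float    # hard cap per autonomous action
    allowed_symbols: set[str]  # agent may only touch these assets

@dataclass
class AutonomousExecutor:
    policy: Policy
    kill_switch: bool = False  # user override: halts all autonomy
    audit_log: list[str] = field(default_factory=list)

    def try_execute(self, symbol: str, amount: float) -> bool:
        if self.kill_switch:
            self.audit_log.append(f"BLOCKED (override): {symbol} {amount}")
            return False
        if symbol not in self.policy.allowed_symbols:
            self.audit_log.append(f"BLOCKED (policy: symbol): {symbol}")
            return False
        if amount > self.policy.max_trade_amount:
            self.audit_log.append(f"BLOCKED (policy: amount): {amount}")
            return False
        self.audit_log.append(f"EXECUTED: {symbol} {amount}")  # monitoring feed
        return True

if __name__ == "__main__":
    agent = AutonomousExecutor(Policy(max_trade_amount=1000.0,
                                      allowed_symbols={"VTI", "BND"}))
    agent.try_execute("VTI", 250.0)    # within policy -> executes
    agent.try_execute("TSLA", 250.0)   # outside policy -> blocked
    agent.kill_switch = True           # user intervenes
    agent.try_execute("VTI", 100.0)    # blocked by override
    print("\n".join(agent.audit_log))
```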
Level 4: Multi-Agent Orchestrated Autonomy
Multi-agent orchestration with defined roles (Director, Analyst, Assistant). Dynamic routing between models, cooperative decision-making, adaptive reconfiguration, and self-improvement through evaluation [5][7]. Example: FinRobot, multi-agent stock prediction systems, orchestrated trading simulations.
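A toy sketch of the Level 4 role split, assuming keyword-based routing as a stand-in for the dynamic model routing described above; production systems such as FinRobot layer cooperation, reconfiguration, and self-evaluation on top of this skeleton [5][7].

```python
# Hypothetical sketch of Level 4 orchestration: a Director routes tasks to
# specialist agents by role, loosely following the Director/Analyst/Assistant
# split described above. All class names and routing rules are illustrative.

class Analyst:
    def handle(self, task: str) -> str:
        return f"Analyst report for: {task}"

class Assistant:
    def handle(self, task: str) -> str:
        return f"Assistant completed: {task}"

class Director:
    """Routes each task to the agent whose role matches it."""

    def __init__(self):
        # Keyword routing stands in for dynamic model/agent selection.
        self.routes = {"analyze": Analyst(), "remind": Assistant()}

    def dispatch(self, task: str) -> str:
        for keyword, agent in self.routes.items():
            if keyword in task.lower():
                return agent.handle(task)
        return f"Director: no agent available for '{task}'"

if __name__ == "__main__":
    director = Director()
    print(director.dispatch("Analyze Q3 spending trends"))
    print(director.dispatch("Remind me to pay rent on the 1st"))
```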
User Personas
Each automation level is designed for different user preferences and needs. Meet the personas that represent each level's target audience.
| Persona | Profile | Target Automation Level | Quote |
|---|---|---|---|
| Tom | 41, Controller | Level 0: Full Direct Manipulation | “I want to check and record every transaction myself, without system logic.” |
| Mira | 29, Product Manager | Level 1: Assistive Direct Manipulation with Suggestions | “Show me options, but I decide what happens.” |
| Lena | 34, UX Designer | Level 2: Semi-Autonomous Agents with Confirmation | “I want to see recommendations, but I make the decisions about my finances myself.” |
| Jonas | 36, IT Consultant | Level 3: Autonomous Agent-Based Execution | “If the app acts for me, I want to be able to intervene at any time.” |
| Sofia | 38, Innovation Manager | Level 4: Multi-Agent Self-Optimizing Automation | “I want a system that knows my goals and improves itself.” |
Interactive Prototypes
Two interactive Figma prototypes demonstrate the different automation levels in a personal budgeting application: a Transaction Input flow and a Saving Goal Setup flow.
Research Limitations
The current state of research is limited by challenges in interpretability/explainability, real-time adaptation, generalizability, and scalability. These remain open research gaps that constrain the transition to higher automation levels [3][5].
References
1. Pal, A., Gopi, S., & Lee, K. M. (2023). Fintech Agents: Technologies and Theories. Electronics, 12(15), 3301. https://doi.org/10.3390/electronics12153301
2. Shneiderman, B., & Maes, P. (1997). Direct manipulation vs. interface agents. Interactions, 4(6), 42–61. https://doi.org/10.1145/267505.267514
3. Satyadhar, J. (2025). A Literature Review of Gen AI Agents in Financial Applications: Models and Implementations. International Journal of Science and Research (IJSR), 14(1), 1094–1100. https://dx.doi.org/10.21275/SR25125102816
4. Maes, P., & Wexelblat, A. (1996). Interface agents. CHI '96: Conference Companion on Human Factors in Computing Systems, New York. https://doi.org/10.1145/257089.257377
5. Yang, H., Zhang, B., Wang, N., Guo, C., Zhang, X., Lin, L., Wang, J., Zhou, T., Guan, M., Zhang, R., & Wang, C. D. (2024). FinRobot: An Open-Source AI Agent Platform for Financial Applications using Large Language Models (Version 2). arXiv. https://doi.org/10.48550/arXiv.2405.14767
6. Lieberman, H. (1997). Autonomous interface agents. CHI '97: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, New York. https://doi.org/10.1145/258549.258592
7. Satyadhar, J. (2025). Review of autonomous systems and collaborative AI agent frameworks. International Journal of Science and Research Archive, 14(2), 961–972. https://doi.org/10.30574/ijsra.2025.14.2.0439
Direct Manipulation vs. Agentic AI Interfaces
Author: Lauterbach
Topic: Factors Influencing Adoption of Direct Manipulation vs. Agentic AI Interfaces
Research Overview
This research synthesizes findings from multiple studies to identify the key factors that influence whether users prefer direct manipulation interfaces or agentic AI-driven interfaces. The analysis reveals that preference is shaped by cultural background, trust disposition, financial literacy, cognitive load, and system design characteristics.
Factors Favoring Direct Manipulation
Autonomous Behavior Concerns
Users may feel threatened if chatbots behave too autonomously. The idea of a thinking robot that autonomously generates its own ideas and desires and expresses its own needs is unsettling, evoking strong feelings of eeriness alongside fascination [7].
Proactive Irrelevant Information
When AI proactively provides irrelevant information, it is viewed negatively by users [7]. Users prefer systems that provide relevant, timely assistance rather than unsolicited suggestions.
Lack of Explanation
Users distrust automated aids, even reliable ones, unless an explanation for errors is provided [9]. If users are not provided with an explanation for the AI's decisions, they are less likely to use it.
Perceived Comparative Advantage
Users judge AI against their own perceived skill. If the AI cannot prove it is adding value above and beyond what the user can do themselves, the user will likely revert to manual control [9].
Cultural Values: Individualism
Individualistic cultures (USA, Northern Europe) strongly prefer direct manipulation and control, viewing AI as a tool for personal goals [1]. Low power distance (egalitarian) societies demand transparent justification from AI systems [2].
Factors Favoring Agentic AI Interfaces
Financial Knowledge & Literacy
Only individuals with an advanced level of financial literacy are likely to be potential users of robo-advisors [4]. High financial knowledge combined with an understanding of the benefits of robo-advisory systems makes adoption more probable [3][4].
Cognitive Fatigue
Intensive cognitive effort degrades the capacity for self-control in economic choices, leading to systematic bias toward immediate gains [5]. Under cognitive load, users may prefer delegation to AI.
High General Trust Disposition
Participants who displayed a high level of trust in financial assistants also had a higher tendency to trust in general [7]. No significant differences were found based on age or neuroticism.
Cultural Values: Collectivism
Collectivistic cultures (China, Southeast Asia) accept interface agents and AI influence, viewing AI as a collaborative social participant [1]. High power distance (hierarchical) societies perceive legitimacy from institutional authority and extrapolate that onto algorithms [2].
Human-in-the-Loop Design
Embracing a human-in-the-loop (HITL) model allows AI agents to propose actions while humans retain final decision-making, balancing automation with oversight. This approach builds trust and enhances rather than competes with human expertise [10].
Trust Calibration
The Trust Spectrum Problem
High trust leads to over-reliance on AI systems, while low trust leads to disuse [7]. Competence is essential and has a significant impact on trust levels. Usefulness is a determining factor in intention to use AI banking assistants.
Design Recommendations
For low trust users: provide detailed information about the assistant's functions, workings, and calculations to enhance understandability and avoid the "black-box" phenomenon [7]. For high trust users: clarify possible risks to facilitate necessary trust calibration and prevent misuse of risky tools.
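Read together, the two recommendations amount to a single calibration rule: adapt disclosure depth to measured trust. The sketch below illustrates this; the trust thresholds and message texts are assumptions for illustration, not values from the cited study.

```python
# Hypothetical sketch of trust-calibrated disclosure: low-trust users get
# detailed "how it works" explanations (countering the black-box effect),
# while high-trust users get explicit risk warnings to prevent over-reliance.

def calibrated_explanation(trust_score: float, recommendation: str) -> str:
    """trust_score in [0, 1], e.g. from a questionnaire such as Körber's [6]."""
    if trust_score < 0.4:  # illustrative threshold, not empirically derived
        return (f"{recommendation}\nHow this was computed: based on your last "
                "12 months of transactions and your stated savings goal.")
    if trust_score > 0.7:
        return (f"{recommendation}\nNote: this projection can be wrong; "
                "market-linked products may lose value.")
    return recommendation

if __name__ == "__main__":
    rec = "Suggested: move 150.00 to your emergency fund this month."
    print(calibrated_explanation(0.2, rec))  # low trust -> detailed workings
    print(calibrated_explanation(0.9, rec))  # high trust -> risk clarification
```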
Key Trust Factors
The system must be familiar, reliable, and predictable. Users must understand the intention of developers, have general trust, as well as specific trust in automation [6]. Explanations of AI predictions lead to significantly better user performance [8].
Framework for Interface Selection
Based on the research synthesis, the following framework guides when each interface paradigm is preferred:
| Factor | Favors Direct Manipulation | Favors Agentic AI |
|---|---|---|
| Culture | Individualistic, low power distance [1][2] | Collectivistic, high power distance [1][2] |
| User Skill | Low perceived AI advantage [9] | High financial literacy + understanding benefits [3][4] |
| Transparency | No explanation provided [9] | Clear explanations, HITL model [8][10] |
| Cognitive State | Low cognitive load, time available | High cognitive fatigue [5] |
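As a rough operationalization of this framework, the sketch below counts the factors that the literature associates with agentic-AI preference and defaults to direct manipulation when the evidence is mixed. The boolean inputs, equal weighting, and majority threshold are all assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical heuristic mirroring the framework table: each factor that the
# research associates with agentic-AI preference adds a vote; direct
# manipulation is the default when evidence is mixed.

@dataclass
class UserContext:
    collectivistic_culture: bool   # [1][2]
    high_financial_literacy: bool  # [3][4]
    explanations_available: bool   # [8][10]
    high_cognitive_fatigue: bool   # [5]

def preferred_paradigm(ctx: UserContext) -> str:
    votes_for_agentic = sum([
        ctx.collectivistic_culture,
        ctx.high_financial_literacy,
        ctx.explanations_available,
        ctx.high_cognitive_fatigue,
    ])
    # Require a majority of factors before moving away from direct control.
    return "agentic AI" if votes_for_agentic >= 3 else "direct manipulation"

if __name__ == "__main__":
    print(preferred_paradigm(UserContext(True, True, True, False)))    # agentic AI
    print(preferred_paradigm(UserContext(False, False, True, False)))  # direct manipulation
```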
Key Insight
Whether the assistant acts more or less autonomously may not be the primary concern—what matters more is whether the system is generally perceived as useful or potentially hazardous [7]. Competence and usefulness are the key drivers of adoption, while a developer's benevolent intentions can positively impact trust calibration.
References
1. Li, M., et al. (2024). Cultural differences in AI preferences. CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/10.1145/3613904.3642660
2. KPMG (2024). Trust in AI: A Global Study. https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html
3. Lim, K. Y., et al. (2023). Millennials and Robo-Advisory Adoption. Sustainability, 15(7), 6016. https://www.mdpi.com/2071-1050/15/7/6016
4. Rossi, A., & Utkus, S. (2022). Financial literacy and robo-advising. Finance Research Letters. https://doi.org/10.1016/j.frl.2022.102835
5. Blain, B., et al. (2016). Neural mechanism of cognitive fatigue. Proceedings of the National Academy of Sciences, 113(33). https://doi.org/10.1073/pnas.1520527113
6. Körber, M. (2019). Theoretical considerations and development of a questionnaire to measure trust in automation. ResearchGate. https://www.researchgate.net/publication/323611886
7. Wester, M., et al. (2023). Trust in AI banking assistants. Frontiers in Artificial Intelligence, 6. https://doi.org/10.3389/frai.2023.1241290
8. Chromik, M., & Butz, A. (2022). Educational intervention and AI explanations. Computers in Human Behavior. https://doi.org/10.1016/j.chb.2022.107594
9. Yin, M., et al. (2022). Trust and AI performance perception. Frontiers in Artificial Intelligence, 5. https://doi.org/10.3389/frai.2022.891529
10. IEEE (2024). Human-in-the-Loop AI Systems. IEEE Transactions. https://ieeexplore.ieee.org/document/11125703