American Journal of Applied Science and Technology
https://www.theusajournals.com/index.php/ajast
<p><strong>American Journal Of Applied Science And Technology (<span class="ng-scope"><span class="ng-binding ng-scope">2771-2745</span></span>)</strong></p> <p><strong>Open Access International Journal</strong></p> <p><strong>Last Submission: 25th of Every Month</strong></p> <p><strong>Frequency: 12 Issues per Year (Monthly)</strong></p> <p> </p>
Oscar Publishing Services
en-US
American Journal of Applied Science and Technology
2771-2745
A Standardization Aligned Framework For Generative Artificial Intelligence And Sensor Fusion In Secure Digital Twin Driven Learning Ecosystems
https://www.theusajournals.com/index.php/ajast/article/view/9225
<p>The accelerating convergence of generative artificial intelligence, cyber physical systems, and digital twin technologies is redefining how learning, security, and system intelligence are conceptualized in contemporary socio technical environments. While artificial intelligence has long been positioned as a transformative force in education, the recent emergence of generative architectures and sensor driven cyber physical infrastructures has introduced unprecedented possibilities for creating adaptive, secure, and intelligent learning ecosystems. Digital twins, which are dynamic virtual representations of physical entities synchronized through real time data, now operate at the intersection of sensor fusion, probabilistic reasoning, and artificial intelligence driven inference. These developments raise profound implications not only for industrial automation and smart infrastructure but also for education systems that increasingly rely on digitally mediated environments for teaching, learning, and governance. The priority reference by M. A. Hussain, V. B. Meruga, A. K. Rajamandrapu, S. R. Varanasi, S. S. S. Valiveti and A. G. Mohapatra in IEEE Communications Standards Magazine provides a rigorous standardization aligned framework for generative AI based sensor fusion in secure digital twin ecosystems, positioning reliability, synchronization, ISO standards, and 3GPP alignment as foundational to trustworthy cyber physical systems (Hussain et al., 2026). 
This article builds upon that framework and extends its relevance into the domain of education and learning technologies, arguing that the future of intelligent education systems depends on the same principles of security, reliability, and interoperability that govern industrial digital twins.</p> <p>Drawing on interdisciplinary scholarship from artificial intelligence in education, online learning theory, and human centered computing, this study develops an original integrative model that situates generative AI driven digital twins as core infrastructures for next generation learning ecosystems. Through a qualitative synthesis of theoretical frameworks, policy reports, and empirical findings, the article explores how sensor fusion, probabilistic logic, and generative models can support personalized learning, adaptive assessment, and ethical governance. The methodological approach emphasizes analytical triangulation across educational technology theory, cyber physical system design, and AI ethics, enabling a robust interpretation of how secure digital twin architectures can mitigate risks associated with data misuse, algorithmic bias, and infrastructural fragility. The results demonstrate that when aligned with international standards and informed by educational theory, generative AI sensor fusion can create resilient, transparent, and learner centered digital environments that surpass the limitations of traditional learning management systems.</p> <p>The discussion advances a critical perspective on the promises and perils of embedding cyber physical intelligence in education. While proponents highlight efficiency, personalization, and scalability, critics warn against surveillance, deskilling, and epistemic opacity. By integrating the standardization aligned framework of Hussain et al. (2026) with educational scholarship such as that of Woolf (2020), Selwyn (2019), and Holmes et al. 
(2023), this article argues for a balanced pathway that prioritizes human agency, pedagogical integrity, and institutional accountability. Ultimately, the study contributes a theoretically grounded and policy relevant vision for secure, intelligent, and equitable digital twin driven learning ecosystems that can support the evolving needs of learners and educators in a data intensive world.</p>Elliot D. Branson
Copyright (c) 2026 Elliot D. Branson
https://creativecommons.org/licenses/by/4.0
2026-02-20
Volume 6, Issue 02, Pages 58-67
Single-Step Precision Programming and Intelligent Control Paradigms for Multiresponsive Soft Robotic Systems in Complex Environments
https://www.theusajournals.com/index.php/ajast/article/view/8998
<p>Soft robotic systems have emerged as a transformative paradigm within robotics research, driven by their intrinsic compliance, adaptability, and safety in unstructured and human-centered environments. Unlike traditional rigid-bodied robots, soft robots exploit deformable materials, bioinspired architectures, and distributed actuation to achieve complex behaviors that are otherwise difficult to realize using classical mechanical designs. Recent advances have further accelerated this field through the convergence of soft materials science, intelligent control, artificial perception, and data-driven learning frameworks. Within this evolving landscape, precision programming of multiresponsive soft robots remains a central scientific and engineering challenge. The need to achieve predictable, repeatable, and decoupled responses across multiple stimuli, such as magnetic fields, mechanical contact, and environmental constraints, has motivated novel approaches that unify material design and control logic.</p> <p>This article presents an extensive theoretical and analytical investigation into the foundations, methodologies, and implications of single-step precision programming for decoupled multiresponsive soft robotic systems, with particular emphasis on millirobot-scale platforms. Building upon recent breakthroughs in precision programming of soft millirobots (Zheng et al., 2024), the paper situates these developments within a broader scholarly context that includes bioinspired mechanoreception, flexible and endoluminal robotic systems, human–robot interaction, multi-agent learning, and intelligent sensing. 
Rather than treating control, perception, and embodiment as separate problems, the article advances the argument that future soft robotic intelligence must be understood as an integrated property emerging from material computation, adaptive control strategies, and environment-aware learning.</p> <p>The methodology adopted in this work is interpretive and theory-driven, synthesizing insights across robotics, intelligent systems, and design theory. Through detailed textual analysis, the paper examines how single-step programming frameworks reduce system complexity, mitigate control coupling, and enable scalable deployment of soft robots in constrained environments. The results section articulates emergent patterns and conceptual findings grounded in existing literature, highlighting how precision programming reshapes performance, reliability, and task generalization. The discussion expands these findings through critical comparison with alternative paradigms, addresses unresolved limitations, and outlines future research trajectories, including ethical, clinical, and industrial implications. By offering a deeply elaborated and publication-ready contribution, this article aims to serve as a comprehensive reference for researchers and practitioners seeking to understand and advance the next generation of intelligent soft robotic systems.</p>Dr. Jonas Feldmann
Copyright (c) 2026 Dr. Jonas Feldmann
https://creativecommons.org/licenses/by/4.0
2026-02-01
Volume 6, Issue 02, Pages 1-6
The Importance Of Emulsification In Preparing Basalt Yarns For Weaving
https://www.theusajournals.com/index.php/ajast/article/view/9195
<p>This research focuses on increasing the strength of basalt yarns produced as a new textile fiber in the conditions of Uzbekistan. It discusses the production of competitive, import-substituting textile fabrics with high operational properties.</p>M.M. Yo’ldasheva, N.Q. Suyunboyeva
Copyright (c) 2026 M.M. Yo’ldasheva, N.Q. Suyunboyeva
https://creativecommons.org/licenses/by/4.0
2026-02-16
Volume 6, Issue 02, Pages 46-50
10.37547/ajast/Volume06Issue02-04
Abstract Study Of Analytical Geometry
https://www.theusajournals.com/index.php/ajast/article/view/9165
<p>This article provides a rigorous exploration of the transition from classical Cartesian coordinate systems to abstract geometric frameworks. It begins by establishing the “death of the fixed origin,” arguing that modern analytical geometry is better understood through the lens of Commutative Algebra and Topology rather than simple numerical plotting. The text covers three major theoretical shifts: the development of Algebraic Varieties and Coordinate Rings, the introduction of Scheme Theory by Alexander Grothendieck, and the application of Sheaf Theory to maintain global consistency in complex manifolds. By synthesizing these high-level concepts, the article demonstrates how abstract geometry serves as the underlying language for both theoretical physics (specifically String Theory) and modern data science. The article is designed for an advanced undergraduate or graduate-level audience, and it successfully bridges the gap between pedagogical geometry and contemporary research. A particular strength of the piece is its treatment of Hilbert’s Nullstellensatz, which it uses to prove the fundamental link between algebraic ideals and geometric shapes. The inclusion of Differential Geometry and the Metric Tensor provides a holistic view, ensuring the reader understands both the algebraic and the continuous aspects of the field.</p>Koshmuratova Gulnaza Muxtarovna
Copyright (c) 2026 Koshmuratova Gulnaza Muxtarovna
https://creativecommons.org/licenses/by/4.0
2026-02-13
Volume 6, Issue 02, Pages 37-39
10.37547/ajast/Volume06Issue02-03
Methods For Improving The Energy Efficiency Of Industrial Wastewater Treatment Facilities
https://www.theusajournals.com/index.php/ajast/article/view/9256
<p>Industrial wastewater treatment facilities are characterized by high energy consumption, which significantly affects their operational costs and environmental performance. This study analyzes the structure of energy consumption in industrial wastewater treatment plants and identifies key areas for improving energy efficiency. Particular attention is given to aeration systems, pumping equipment, sludge energy recovery through anaerobic digestion, and the implementation of automated process control systems. Quantitative indicators for assessing energy efficiency are presented, including specific energy consumption per unit volume of treated wastewater and per unit mass of removed pollutants. A comparative analysis of conventional and energy-efficient technological solutions demonstrates that integrated modernization measures can reduce electricity consumption by 20–40% while enhancing environmental sustainability and economic performance.</p>Sativaldiev Aziz
Copyright (c) 2026 Sativaldiev Aziz
https://creativecommons.org/licenses/by/4.0
2026-02-22
Volume 6, Issue 02, Pages 78-82
10.37547/ajast/Volume06Issue02-07
Algorithmic Prognostics and AI-Driven DevOps as a Convergent Architecture for Predictive Maintenance in Industry 4.0 Software-Intensive Systems
https://www.theusajournals.com/index.php/ajast/article/view/9124
<p>The transformation of contemporary industrial systems into software-intensive, cyber-physical, and continuously evolving infrastructures has fundamentally altered the nature of maintenance, reliability, and operational governance. Traditional predictive maintenance emerged from mechanical engineering and operations research traditions that sought to anticipate physical component failure through degradation modelling, statistical inference, and condition monitoring. In parallel, modern software engineering has undergone its own transformation through the rise of DevOps and, more recently, AI-driven DevOps, where machine learning automates deployment, monitoring, testing, and self-healing of software systems. These two trajectories, although historically separate, are increasingly converging within Industry 4.0 environments in which physical assets, software platforms, data pipelines, and organizational workflows are deeply entangled. This article develops a comprehensive theoretical and methodological framework that integrates predictive maintenance models with AI-driven DevOps architectures to conceptualize a unified paradigm of algorithmic prognostics for industrial software-intensive systems.</p> <p>Drawing on a broad range of literature on degradation modelling, Bayesian inference, Markov decision processes, neural networks, fuzzy logic, and ontology-based maintenance, the article demonstrates that predictive maintenance is no longer confined to the monitoring of physical assets but extends to the governance of entire digital–physical ecosystems. The review of AI-driven DevOps, particularly as articulated in contemporary research on intelligent automation for deployment and maintenance, provides the missing software-centric layer that enables predictive maintenance insights to be operationalized in real time within continuous delivery pipelines and autonomous system management. 
The study therefore positions AI-driven DevOps not merely as a software productivity tool, but as an infrastructural backbone for predictive maintenance in Industry 4.0.</p> <p>A qualitative, integrative methodology is adopted to synthesize heterogeneous scholarly traditions into a coherent analytical framework. The results of this synthesis reveal that predictive maintenance accuracy, interpretability, and organizational effectiveness are significantly enhanced when prognostic models are embedded into AI-driven DevOps feedback loops. This allows maintenance policies to be dynamically updated, validated, and deployed in the same way that modern software updates are managed. The discussion elaborates the theoretical implications of this convergence, including the redefinition of failure, reliability, and accountability in cyber-physical systems, and critically examines the risks associated with algorithmic opacity and over-automation.</p> <p>By situating predictive maintenance within a DevOps-enabled, AI-orchestrated lifecycle of continuous learning and intervention, the article offers a new conceptual foundation for understanding how Industry 4.0 organizations can achieve resilient, adaptive, and economically sustainable operations.</p>Brendan L. Ashcroft
Copyright (c) 2026 Brendan L. Ashcroft
https://creativecommons.org/licenses/by/4.0
2026-02-11
Volume 6, Issue 02, Pages 25-29
Methodology For Developing Students' Special Competencies In The Educational Process
https://www.theusajournals.com/index.php/ajast/article/view/9227
<p>This article discusses the methodology for developing students' special competencies using software tools in the educational process.</p>Egamberdiyev Akmal Olimjanovich
Copyright (c) 2026 Egamberdiyev Akmal Olimjanovich
https://creativecommons.org/licenses/by/4.0
2026-02-20
Volume 6, Issue 02, Pages 68-71
10.37547/ajast/Volume06Issue02-06
Architecting Compliance-Embedded Machine Learning Pipelines for Financial and Healthcare Governance in Cloud-Native Environments
https://www.theusajournals.com/index.php/ajast/article/view/9108
<p>The accelerating deployment of machine learning systems across regulated domains such as healthcare and financial services has created an unprecedented tension between innovation velocity and compliance rigor. Cloud-native machine learning pipelines, particularly those orchestrated through managed platforms such as AWS SageMaker, enable rapid model experimentation, automated deployment, and continuous learning at scale, yet these same characteristics introduce new forms of regulatory risk, opacity, and governance complexity. Within healthcare, compliance with data protection and accountability regimes such as HIPAA requires not merely secure data handling but demonstrable, auditable control over every stage of the machine learning lifecycle, from data ingestion through model inference and archival. In financial services, parallel regulatory pressures arise from anti-fraud, consumer protection, and explainability mandates that require models to be both accurate and interpretable. Recent scholarly and industrial discourse has increasingly argued that conventional, documentation-based compliance frameworks are fundamentally inadequate for such environments, giving rise to the paradigm of compliance-as-code, in which regulatory constraints are embedded directly into computational workflows. The emergence of HIPAA-as-Code architectures for automated audit trails within AWS SageMaker pipelines represents one of the most concrete instantiations of this paradigm, demonstrating how regulatory obligations can be operationalized through infrastructure, logging, and policy enforcement layers rather than treated as external afterthoughts (European Journal of Engineering and Technology Research, 2025).</p> <p>This article develops a comprehensive theoretical and methodological analysis of compliance-embedded machine learning pipelines, situating HIPAA-as-Code within the broader evolution of MLOps, AIOps, and cloud governance. 
Drawing on foundational work in machine learning engineering, software engineering for machine learning, and regulatory informatics, the study articulates how automated auditability, provenance tracking, and policy-driven orchestration can transform both healthcare and financial compliance regimes (Amershi et al., 2019; Zaharia, 2018; Treveil, 2020). Through an interpretive synthesis of literature on financial fraud detection, explainable artificial intelligence, and hidden technical debt, the article argues that compliance-as-code is not merely a technical convenience but a necessary condition for trustworthy and sustainable deployment of machine learning in high-stakes domains (Ali et al., 2022; Hassija et al., 2024; Sculley, 2015).</p> <p>By integrating HIPAA-as-Code with advances in explainable AI, fraud detection, and cloud-native MLOps, this article contributes a unified vision of how regulated machine learning systems can be both innovative and accountable. It provides scholars and practitioners with a deeply elaborated conceptual foundation for designing, governing, and evaluating machine learning pipelines that are intrinsically aligned with regulatory and ethical expectations rather than perpetually at risk of violating them.</p>Dr. Adrian Volkov
Copyright (c) 2026 Dr. Adrian Volkov
https://creativecommons.org/licenses/by/4.0
2026-02-02
Volume 6, Issue 02, Pages 7-13
Increasing Efficiency In Textile Industry Enterprises Based On A Balanced Indicators System
https://www.theusajournals.com/index.php/ajast/article/view/9213
<p>This article discusses the theoretical and methodological foundations of using the balanced scorecard (BSC) to improve efficiency at textile enterprises. The study interprets the BSC as a strategic management tool that integrates the financial and non-financial aspects of the enterprise's activities into a single management mechanism. The article substantiates the possibilities of assessing efficiency based on the interrelationship between working with consumers, internal business processes, innovative development and financial results. It is also scientifically concluded that the introduction of the BSC in textile enterprises can improve management processes, effectively use resources and increase competitiveness.</p>Sattikulova Gulnara Akhmadkhonovna
Copyright (c) 2026 Sattikulova Gulnara Akhmadkhonovna
https://creativecommons.org/licenses/by/4.0
2026-02-18
Volume 6, Issue 02, Pages 51-57
10.37547/ajast/Volume06Issue02-05
Automation-Driven Paradigms For Legacy System Transformation: Integrating AI-Augmented Pipelines In Quality Assurance
https://www.theusajournals.com/index.php/ajast/article/view/9183
<p>The acceleration of digital transformation across global industries has precipitated an urgent need to modernize legacy information systems. While conventional migration strategies emphasize infrastructural upgrades or manual QA interventions, the emergence of artificial intelligence (AI) has enabled the conceptualization of AI-augmented pipelines capable of automating quality assurance (QA) and system validation processes. This study provides a comprehensive examination of automation-driven methodologies for legacy system transformation, with particular emphasis on AI-integrated QA frameworks. Grounded in contemporary literature and empirical evaluations, the research delineates theoretical foundations, historical developments, and practical implications of AI adoption in software migration. Through a synthesis of prior studies and emerging technological insights, the investigation demonstrates that AI-based pipelines facilitate accelerated validation, improved accuracy, and adaptive testing capabilities, surpassing the limitations of conventional QA paradigms (Tiwari, 2025). Furthermore, the analysis explores the socio-technical challenges associated with migration, including workforce adaptation, regulatory compliance, and interoperability concerns, proposing a multidimensional blueprint for successful transformation. The findings contribute to a deeper understanding of strategic, operational, and ethical dimensions of AI-assisted QA, offering actionable guidance for researchers, practitioners, and policy-makers seeking sustainable digital evolution.</p>Dr. Emiliano Rinaldi
Copyright (c) 2026 Dr. Emiliano Rinaldi
https://creativecommons.org/licenses/by/4.0
2026-02-15
Volume 6, Issue 02, Pages 40-45
The Imperative Of Wisdom: Balancing Failure, Experience, And AI Limitations In Knowledge Transmission
https://www.theusajournals.com/index.php/ajast/article/view/9274
<p>Wisdom is a critical factor for effective leadership and organizational success, encompassing not only technical knowledge but also ethical judgment, situational awareness, and social responsibility. While artificial intelligence (AI) can process vast amounts of data, identify patterns, and support decision-making, it cannot replicate the depth of learning derived from human experience and reflection. This review examines the nature of wisdom, highlighting the roles of experience and failure in cultivating practical, context-sensitive, and ethically grounded decision-making. It also explores the limitations of AI in transmitting tacit knowledge, moral insight, and reflective understanding. Drawing on examples from management practice, the paper demonstrates that AI enhances efficiency and analytical capacity but cannot substitute for the nuanced judgment and human-centered learning that underpin wise leadership. The findings underscore the importance of integrating AI tools with experiential learning, reflective practices, and ethical deliberation to develop adaptive, resilient, and morally aware leaders in complex organizational environments.</p>Fayzullayeva Marifat Abduvaxob qizi
Copyright (c) 2026 Fayzullayeva Marifat Abduvaxob qizi
https://creativecommons.org/licenses/by/4.0
2026-02-23
Volume 6, Issue 02, Pages 83-88
10.37547/ajast/Volume06Issue02-08
Machine Learning-Driven DevOps: A Unified Framework for Autonomous Software Operations
https://www.theusajournals.com/index.php/ajast/article/view/9126
<p>The accelerating complexity of modern software systems, driven by cloud native architectures, microservices, continuous integration and continuous deployment pipelines, and data intensive artificial intelligence workloads, has created a structural transformation in how software is designed, delivered, and governed. DevOps emerged as a response to this complexity by integrating development and operations into a unified lifecycle, yet traditional DevOps practices increasingly struggle to manage the scale, velocity, and uncertainty inherent in contemporary digital infrastructures. Artificial intelligence, particularly in the form of machine learning driven automation, has consequently become a central force in the evolution of DevOps into what is now widely referred to as AIOps and intelligent DevOps. This article develops a comprehensive, publication ready analysis of how AI driven automation reshapes software engineering, operations, governance, and organizational value creation, synthesizing insights from software engineering research, machine learning systems theory, enterprise architecture, and economic studies of AI adoption. Grounded in the conceptual foundations articulated by Varanasi (2025) regarding AI driven DevOps pipelines, this study integrates broader literature on data preparation, technical debt, neural architecture search, predictive maintenance, bias mitigation, and enterprise automation to construct a unified theoretical framework for intelligent DevOps ecosystems.</p> <p> </p> <p>Ultimately, this article concludes that AI driven DevOps is not simply an incremental improvement of existing practices but a foundational reconfiguration of software engineering as a discipline. 
By embedding learning systems into every layer of the software lifecycle, organizations move toward continuously adaptive digital infrastructures that are capable of anticipating failures, optimizing performance, and aligning technological operations with business value in real time, as articulated by Falcioni (2024) and O'Brien et al. (2018). This transformation, however, requires rigorous governance, high quality data pipelines, and a rethinking of professional roles in software engineering to ensure that algorithmic intelligence remains aligned with human values and organizational objectives.</p>Frederick J. Stonebridge
Copyright (c) 2026 Frederick J. Stonebridge
https://creativecommons.org/licenses/by/4.0
2026-02-11
Volume 6, Issue 02, Pages 30-36
Processing Used Oils To Produce Reducer Lubricants And Studying Their Physicochemical Properties
https://www.theusajournals.com/index.php/ajast/article/view/9110
<p>Work is underway to develop experimental samples of reducer lubricating oil compositions based on locally available raw materials and secondary (recycled) materials from the chemical industry. Reducer lubricants RSMy.s.-Y (summer) and RSMy.s.-Q (winter) are semi-fluid reducer lubricants intended for use in gear transmissions of heavy machinery reducers. RSMy.s.-Y is designed for operation in summer conditions, while RSMy.s.-Q is intended for winter conditions.</p>J.Sh. Baxtiyorov, S.Dj. Xolikova, L.A. Ismailova, S.Sh. Turabdjanova
Copyright (c) 2026 J.Sh. Baxtiyorov, S.Dj. Xolikova, L.A. Ismailova, S.Sh. Turabdjanova
https://creativecommons.org/licenses/by/4.0
2026-02-09
Volume 6, Issue 02, Pages 14-24
10.37547/ajast/Volume06Issue02-02
A Multidimensional Framework for Business Model Innovation and Consulting-Led Transformation in Small and Medium-Sized Enterprises
https://www.theusajournals.com/index.php/ajast/article/view/9253
<p>Small and medium-sized enterprises (SMEs) remain the backbone of most national economies, yet they face disproportionate pressures arising from technological disruption, sustainability imperatives, international competition, and structural fragility. Within this context, business model innovation and professional consulting have emerged as two interdependent mechanisms through which SMEs seek resilience, growth, and strategic renewal. While existing research has extensively examined business model innovation as a source of competitive advantage and organizational performance, far less attention has been paid to the way in which structured consulting frameworks mediate, accelerate, and stabilize these transformations. This article develops an integrated theoretical and methodological model that synthesizes contemporary business model innovation literature with a complex consulting architecture tailored specifically to SME environments. Central to this synthesis is the conceptualization of consulting not merely as advisory intervention, but as a systemic governance and learning mechanism that aligns strategic intent, operational capability, and sustainability orientation over time.</p> <p>The study is grounded in an extensive interpretive analysis of existing theory and empirical patterns across innovation, sustainability, servitization, and strategic agility, with particular emphasis on the complex model of business consulting articulated by Kovalchuk (2025). That model provides a multi-layered architecture for diagnosing, designing, implementing, and stabilizing business transformations in SMEs. By embedding this consulting framework into the dominant theories of business model innovation, the article demonstrates how value creation, value capture, and stakeholder alignment can be orchestrated through professionalized advisory processes rather than left to ad hoc entrepreneurial improvisation. 
The research contributes a unified conceptual lens through which SME leaders, consultants, and scholars can understand business model renewal as a continuous, learning-driven, and stakeholder-oriented process.</p> <p>Methodologically, the article employs a qualitative meta-synthesis approach, combining systematic conceptual comparison with interpretive theory building. Rather than testing hypotheses, the research develops a comprehensive analytical framework that integrates business model design, strategic agility, sustainability, and consulting governance into a single explanatory system. The results reveal that SMEs that align consulting processes with business model innovation are more likely to achieve coherence between strategy, operations, digitalization, and sustainability outcomes. The discussion extends these findings by situating them within broader debates on entrepreneurial theory, dynamic capabilities, circular economy, and outcome-based value creation. The article concludes by outlining implications for SME policy, consulting practice, and future academic research.</p>Tristan A. Montrose
Copyright (c) 2026 Tristan A. Montrose
https://creativecommons.org/licenses/by/4.0
2026-02-22
Volume 6, Issue 02, Pages 72-77