https://visnyk.vntu.edu.ua/index.php/visnyk/issue/feedVisnyk of Vinnytsia Politechnical Institute2025-07-31T15:48:38+03:00Голубєва Валентина Тадеушівнаvisnykvpi@gmail.comOpen Journal Systems<p>The journal “Visnyk of Vinnytsia Polytechnical Institute” is included in the List of scientific professional editions of Ukraine in the branches of technical sciences: 121, 122, 123, 124, 125, 126, 131, 132, 133, 141, 144, 151, 152, 163, 172, 183, 275, and 01.05.00, 05.02.02, 05.02.10, 05.03.05, 05.09.03, 05.11.00, 05.13.05, 05.13.06, 05.12.13, 05.12.20, 05.14.02, 05.14.06, 05.22.20, 05.23.02, 05.23.05 (orders of the Ministry of Education and Science of Ukraine dated 11.07.2019, № 975; 15.10.2019, № 1301; 17.03.2020, № 409), as well as F2, F3, F4, F5, F6, F7, G2, G3, G4, G5, G6, G7, G8, G9, G11, G22, J8 (in accordance with Resolution No. 1021 of the Cabinet of Ministers of Ukraine dated August 30, 2024).</p> <p>The journal is indexed in the international scientometric databases Index Copernicus International and Google Scholar and is abstracted in the Ukrainian abstract journal Dzherelo.</p> <p>The journal publishes articles presenting new theoretical and practical results in the technical, economic, and natural sciences and the humanities. Reviews of the current state of important scientific problems, reports on scientific and methodological conferences held at VNTU, and articles on higher-education pedagogy are also published.</p>https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3277Development of Innovation Potential of VNTU on the Basis of the Startup School “Vinnytsia Challenge” (to the Tenth Anniversary of the School)2025-07-31T15:48:38+03:00R. N. Kvyetnyyrkvetny@vntu.edu.uaS. V. Barabanbaraban.s.v@vntu.edu.uaM. V. Barabanbaraban87@gmail.comV. V. Harmashvv2211@ukr.netV. Yu. 
Kotsiubynskyivkotsyubinsky@sprava.net<p>The paper presents a comprehensive analysis of the innovation potential of Vinnytsia National Technical University (VNTU), focusing on the ten-year experience of the University’s startup school “Vinnytsia Challenge”, a regional center of the nationwide innovation ecosystem “Sikorsky Challenge Ukraine”. The study investigates the integration of practical entrepreneurial experience into higher education and research activities. A structured model for effective interaction between academia, science, and business within the University’s innovation ecosystem is proposed, emphasizing the strategic role of startups in enhancing institutional innovation capacity.</p> <p>The development of the startup school is traced across four key stages: initial formation (2015–2017), rapid expansion (2018–2020), international recognition (2021–2023), and sustainable integration (2024–present). The school has incubated dozens of high-impact projects in critical domains such as defense, healthcare, ecology, and information technology. Its educational framework includes over 130 hours of training in innovation management, startup marketing, artificial intelligence applications, financial planning, and pitching, supported by both local and international experts and mentors.</p> <p>Prominent outcomes include award-winning startups such as SolarInt (renewable energy storage), BrightBrille (AI-powered accessibility tools), and Escadron (remote-operated battlefield transport). The study outlines the financial architecture supporting the school, including state grants, municipal funding, and strategic investments from private Israeli firms.</p> <p>Key institutional challenges, such as limited early-stage funding, weak commercialization pipelines, and insufficient faculty engagement, are critically analyzed, and evidence-based solutions are offered. 
These include the establishment of university-level venture funds, incentivization frameworks for faculty, and enhanced access to global startup networks.</p> <p>The findings underscore the replicability of the “Vinnytsia Challenge” model across other Ukrainian universities, advocating the formation of regional startup hubs and methodical dissemination of best practices. The experience accumulated by VNTU demonstrates a scalable approach to cultivating innovation-driven human capital and strengthening the national ecosystem during post-war reconstruction and digital transformation.</p> <p>The study's insights offer a strategic foundation for aligning academic institutions with market-oriented innovation processes, thereby enabling universities to act as engines of economic resilience and technological advancement.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3273Modeling the Process of Controlling the Dynamics of a Longitudinal Cutting Mill2025-07-31T14:41:11+03:00I. G. Grabarivan-grabar@ukr.netO. Ye. Zhukovskyiolexandr.zhukovsky@encon.com.ua<p>The paper considers the issues of paper rewinding processes and compliance with technological parameters to ensure the quality of this process. A review of the components of the modern elemental mechanical and electronic base for the design and creation of machines that perform the rewinding processes of various materials and paper in particular is carried out. 
Attention is paid to the specific features of using programmable logic controllers and electric drives in the design of rewinding machines.</p> <p>The kinetics of the electrical (voltage and current) and mechanical (rotor angular velocity and shaft torque) parameters of the PRS-501 machine were experimentally investigated in the transient technological processes of acceleration and braking when switching from threading speed to operating speed.</p> <p>Based on the previously obtained dynamic model, a control system is proposed, developed on the basis of a SIEMENS SIMATIC S7-300 controller and a SIEMENS SINAMICS DCM electric drive. A strong correlation of angular velocity with armature voltage (R<sup>2</sup> = 0.8588) and of shaft torque with consumed current (R<sup>2</sup> = 0.8337) was revealed in acceleration mode; in braking mode, R<sup>2</sup> = 0.9117 and R<sup>2</sup> = 0.8865, respectively.</p> <p>The mechanical strength parameters of the paper being rewound were investigated, together with the dependence between the strength index and the mechanical parameters of the unwinding unit required to maintain the tension specified for high-quality rewinding. Statistical data on the strength index in the longitudinal and transverse directions for one grade of paper were analyzed.</p> <p>A tension control system for the unwinding motor based on a PID controller was proposed. The mechanical properties of the paper in the longitudinal and transverse directions were studied and show that the basic strength condition σ<sub>techn</sub> < [σ] and, respectively, M<sub>t</sub> < [M] is fulfilled over the entire range of their variation. 
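As an illustration of the tension-control concept described above, a torque-limited discrete PID loop can be sketched in Python as follows (a minimal sketch only: the class name, the gains, the sampling step, and the torque limit [M] in the usage example are hypothetical, not values from the paper):

```python
class PIDTensionController:
    """Discrete PID controller for web tension with an operating-torque limit.

    Clamping the commanded torque to the admissible value [M] keeps the
    drive safe if the web breaks: measured tension collapses to zero and
    the error would otherwise wind the integrator up without bound.
    """

    def __init__(self, kp, ki, kd, torque_limit, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.torque_limit = torque_limit  # admissible torque [M]
        self.dt = dt                      # sampling step, s
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, tension_setpoint, tension_measured):
        error = tension_setpoint - tension_measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        torque = (self.kp * error
                  + self.ki * self.integral
                  + self.kd * derivative)
        if abs(torque) > self.torque_limit:
            # Saturated: undo this step's integration (anti-windup)
            # and clamp the output to the admissible torque.
            self.integral -= error * self.dt
            torque = max(-self.torque_limit,
                         min(self.torque_limit, torque))
        return torque
```

In a real drive configuration, the clamp would correspond to the torque-limit parameter of the converter rather than a software check.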
The proposed control system implements the concept of limiting the operating torque to maintain tension, instead of direct torque control, which ensures a safe rewinding process in the event of a web break.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3274The Nissan Motor Oil 10W–30 Lubricating Properties Experimental Studies by the Adsorbed Film Strength Criterion2025-07-31T14:51:06+03:00O. V. Orysenkooleksandr.orysenko@gmail.comA. I. Kryvorotanatoliikryvorot@gmail.comM. V. Shapovalnvshapoval75@ukr.netM. O. Skorykmaxym.skoryk@gmail.com<p>The article examines current issues related to assessing the lubricating properties of engine oil depending on operating temperature conditions and the degree of oil degradation caused by vehicle mileage. The paper presents the results of experimental research aimed at determining the rupture force of the Nissan Motor Oil 10W–30 oil film adsorbed on friction surfaces.</p> <p>For this purpose, a specialized tribological testing machine was developed to simulate conditions that closely resemble the real operating environment of components such as the valve train, rocker arms, and camshaft. The experiments considered two varying factors: engine oil temperature and vehicle mileage. The obtained data were processed using mathematical statistics methods, resulting in regression equations in both coded and natural form, describing the dependence of oil film rupture force on the studied parameters.</p> <p>It was found that the rupture force of the engine oil film decreases with increasing temperature, while the highest values of this parameter are observed at approximately 2500 km of mileage. 
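A two-factor regression of this kind (rupture force versus temperature and mileage) can be sketched as follows; the quadratic model form and all numbers in the test data are illustrative assumptions, not the paper's actual equations:

```python
import numpy as np

def fit_quadratic_surface(temp, mileage, force):
    """Least-squares fit of F(T, m) = b0 + b1*T + b2*m + b3*T*m + b4*T^2 + b5*m^2.

    A generic two-factor polynomial regression of the kind used to model
    oil-film rupture force versus oil temperature and vehicle mileage.
    """
    T, m, F = map(np.asarray, (temp, mileage, force))
    # Design matrix: intercept, linear, interaction, and quadratic terms.
    X = np.column_stack([np.ones_like(T), T, m, T * m, T**2, m**2])
    coeffs, *_ = np.linalg.lstsq(X, F, rcond=None)
    return coeffs

def predict(coeffs, T, m):
    """Evaluate the fitted surface at a single (temperature, mileage) point."""
    return coeffs @ np.array([1.0, T, m, T * m, T**2, m**2])
```

With experimental data substituted for the synthetic points, `predict` returns the fitted rupture force at any temperature/mileage combination inside the studied range.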
It was also revealed that slightly used oil may exhibit better lubricating properties than new oil due to the accumulation of combustion products with increased dipolarity, which enhances its adsorption ability on metallic surfaces.</p> <p>Based on the research carried out, practical recommendations have been proposed for optimizing engine oil replacement intervals. These include adhering to the manufacturer’s service schedule and mixing a small amount of used oil with new oil to improve lubricating performance. The results can be used to improve diagnostic methods for assessing the operational condition of internal combustion engines.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3275Research of Entropy Estimates of Diagnostic Characteristics of Technological Systems Based on Probability Distributions2025-07-31T15:16:33+03:00S. V. Kovalevskyykovalevskii61@gmail.comV. Ya. Poberezhetsvladpoberezhets@gmail.com<p>The study combines theoretical foundations with practical experiments to substantiate a universal information-based approach to the diagnosis of technological systems. At the outset, the authors invoke the fundamental concepts of Shannon entropy as a measure of uncertainty, explaining its role in the quantitative assessment of the informational richness of diagnostic signals. The central premise is that traditional statistical metrics, such as the mean or variance, do not always allow for a comprehensive description of complex structural changes in materials, especially under conditions of magneto-resonant processing. For this reason, an approach was developed that enables the comparison of different probabilistic models on a single informational scale. 
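The spectral-entropy measure invoked here can be sketched in a few lines; the base-2 logarithm, the amplitude normalization, and the division by log2(N) are our assumptions for illustration, not necessarily the authors' exact formulation:

```python
import math

def spectral_entropy(amplitudes, normalize=True):
    """Shannon entropy of an amplitude spectrum.

    The spectrum is converted to a probability distribution
    p_i = A_i / sum(A), then H = -sum(p_i * log2(p_i)).
    With normalize=True the result is divided by log2(N), so 1.0
    corresponds to a perfectly uniform spectrum and 0.0 to a single
    spectral line.
    """
    total = sum(amplitudes)
    probs = [a / total for a in amplitudes if a > 0]
    h = -sum(p * math.log2(p) for p in probs)
    if normalize:
        h /= math.log2(len(amplitudes))
    return h
```

On such a scale, a response dominated by one resonance peak (the steel-like case) yields a value near 0, while a nearly uniform spectrum (the dielectric case) yields a value near 1.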
The methodology is presented in detail, involving the construction of a unified, normalized parameter space: all distributions are aligned according to common characteristics of mean value and dispersion, ensuring an impartial comparison. Using five materials — steel, copper, duralumin, textolite, and acrylic glass — as examples, the authors model spectral responses to broadband vibrational excitation in a constant magnetic field. For each sample, amplitude-frequency characteristics are obtained, which make it possible to identify the unique “spectral fingerprints” of the materials. The high degree of order in the steel response, manifested by a pronounced resonance peak, contrasts with the nearly uniform spectrum of the dielectrics, indicating different interaction mechanisms between broadband vibrations and the magnetic field in these materials. The results demonstrate that spectral entropy is a sensitive indicator of structural changes: a decrease in its value correlates with an increase in material ordering, whereas an elevated entropy indicates an even distribution of energy across frequencies. Based on these findings, practical recommendations are formulated for selecting optimal statistical models for different classes of materials: for example, for ferromagnetic metals it is advisable to use distributions with heavier tails, while for non-metallic materials models with maximal informational uncertainty are preferred. The proposed approach opens new prospects for information-oriented diagnostics and non-destructive testing, contributing to enhanced reliability and efficiency of technological processes in mechanical engineering and materials science.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3276Vulnerabilities and cybersecurity of Starlink2025-07-31T15:31:50+03:00D. M. Rozenvasserdenysrozenvasser@gmail.comV. V. PedyashPedyash_journal@gmail.comO. P. RusuOPRusu_journal@gmail.comYu. O. 
StrelkovskaYuOStrelkovska_journal@gmail.com<p><em>The article is devoted to a comprehensive analysis of vulnerabilities and cybersecurity problems of the Starlink satellite network, which provides global access to the Internet through a network of low-orbit satellites. The technological aspects of the system’s functioning are considered, including the network architecture, data transmission principles, as well as the interaction between satellites, ground stations and user terminals. The key components of the system that can become objects of potential cyberattacks are outlined, in particular satellites, ground gateways and client terminals. Particular attention is paid to the analysis of the main threats, such as signal interception, DDoS attacks, unauthorized access to data, signal jamming and physical failures in the operation of satellites. The possible consequences of such attacks for private users, businesses, military infrastructure and the global communications network are investigated. The current state of Starlink cybersecurity is analysed, including the use of encryption, authentication, secure data transfer protocols, and early threat detection systems. Recommendations are provided for improving cybersecurity, including the implementation of dynamic encryption mechanisms, increasing network resilience to external attacks, improving incident response strategies, and integrating advanced technologies such as quantum cryptography and zero-trust networks. The overall level of system security is assessed, including attack resilience indicators, incident response time, and network availability. Promising directions for reducing risks and ensuring stable operation of the Starlink infrastructure in the face of modern cyber threats are outlined. 
The level of system security is assessed based on numerical indicators, including resistance to external threats and the ability to recover from attacks.</em></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3259Controller of Heat Point Automation with Microclimate Parameter Control Subsystems under Hybrid Power Supply Conditions in a Greenhouse Complex2025-07-30T16:56:44+03:00S. M. Yakymetsysm.krnu.et@gmail.comV. V. Gladkyivv_gl@ukr.netV. Yu. Nozhenkonozhenkovika@gmail.comA. V. Tkalychtkalychandrey@gmail.com<p>This paper focuses on improving the energy efficiency of greenhouse farming in the context of modern challenges related to decarbonization and the implementation of renewable energy sources. The study examines technologies for intelligent greenhouse microclimate control, including sensor systems, automated control algorithms, and hybrid power supply schemes based on solar energy. The authors propose innovative solutions to enhance the energy autonomy of greenhouse complexes, which contribute to reducing operational costs and ensuring the resilience of agricultural production. Special attention is paid to the development of hybrid power supply systems that combine conventional grid sources with renewable energy (solar panels). The proposed model allows the sale of surplus electricity to the grid, further reducing operational expenses and increasing the financial efficiency of the project. The study explores the energy autonomy and reliability of the system by integrating different power sources. A generalized structural diagram of the system is presented, incorporating key energy supply and microclimate management units. A detailed description of the automation controller’s functional scheme is provided; it ensures efficient energy distribution and microclimate stability and minimizes human intervention. 
The analysis of the energy consumption of key greenhouse systems (heating, cooling, and lighting) under various microclimate control methods was conducted. The results indicate that the introduction of intelligent regulation systems significantly reduces energy consumption: by 30…40 % when using smart regulation algorithms; by 50…60 % with the implementation of comprehensive IoT solutions based on neural networks and genetic algorithms. The proposed microclimate control subsystem includes sensor blocks, a microcontroller module, actuator control units, and signal processing and conversion modules. A detailed description of each functional component, its role, and interactions is provided. The research also includes an economic analysis of the payback period for the proposed solutions. It was determined that transitioning from traditional management methods to intelligent and IoT-based systems can reduce electricity costs by up to 60 % and ensure a payback period within 0.3…1.5 years, depending on implementation complexity. The results of this study can be used to modernize greenhouse complexes, optimize energy consumption in the agricultural sector, and promote the sustainable development of greenhouse farming.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3260CFD Modeling of the Acceptability of Using Pellets Common on the Ukrainian Market as Fuel for a Pellet Boiler with Retort-Type Burner2025-07-30T17:27:18+03:00A. Yu. Rachуnskyіarturrachinskiy@gmail.comV. V. BaranyukAleksandrW@i.uaO. O. Pikeninpikenin.work@gmail.comA. I. Bordiianbordyanartem@gmail.com<p>SET boilers are designed for heat supply of residential, municipal, and industrial buildings equipped with heating systems with natural or forced circulation of water as the coolant. Sketch drawings and some thermal-hydraulic characteristics of these boilers are publicly available. 
The developer’s website states that the boilers operate with automated loading of pellets made from wood waste. The developers emphasize that other types of solid fuel can be used, with corresponding changes in the technical characteristics and service life of the boiler.</p> <p>To avoid a costly experiment on existing equipment, the authors developed a CFD (Computational Fluid Dynamics) model of the processes taking place in a SET water-heating boiler with a capacity of 25 kW when burning various types of pellets, using the modern ANSYS Fluent software package. The calculations were performed under the academic license of the ANSYS Student software package. This license has been completely free since 2015 and is intended for introductory and educational tasks in an academic environment. When modeling the processes that occur during pellet combustion, the authors simulated the thermal-hydraulic characteristics of the flow of a continuous gas phase interacting with a discrete phase in the form of pellet granules. The results of the CFD modeling were verified against the passport data provided by the developer of the SET-25 boiler for the case of burning wood pellets. It was shown that the discrepancy of the results does not exceed 2 %; therefore, the developed model can be used to simulate the combustion of other common types of pellets: from coniferous wood, rapeseed, bird droppings, straw, and bottom sludge. The results obtained can be useful in designing new water-heating boilers and in verification calculations for existing ones.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3261Specific Features of Calculation of Medium-Power Wood Chips Water Heating Boilers2025-07-30T18:09:46+03:00L. A. BodnarBodnar06@ukr.netD. V. 
Stepanovstepanovdv@ukr.net<p>The paper studies the performance of an 1800 kW water-heating boiler for burning wood chips. Literature data on methods for calculating medium-power water-heating boilers using solid biofuels are analyzed. It is noted that the introduction of solid biofuel boilers will contribute to the replacement of fossil fuels, the renewal of the technological parks of existing equipment, and the development of the production, installation, and maintenance of new equipment. It is shown that modern literature contains very little information on methods for calculating medium-power boilers on solid biomass, and the available information is neither systematized nor generalized. It is shown that the Normative Method for Thermal Calculation of Boiler Units contains no information on the features of calculating boilers on solid biomass; in particular, there are no recommendations for calculating heat exchange in the boiler furnace or in a heat exchanger with intensified heat exchange for boilers on solid biomass. The influence of the ash removal fraction, the excess air coefficient, wood moisture, and the A<sub>ash</sub> coefficient on the flue gas temperature at the boiler outlet was investigated and compared with operating data. The features of calculating a wood chip water-heating boiler were proposed. The calculated gas temperature at the boiler outlet was compared with operating data. It was shown that the literature contains little information on the recommended values of the thermal stress of the combustion mirror for modern methods of burning solid plant biomass. 
It was indicated that the study of solid biomass combustion processes and the development of calculation methods, the formation of a database on the coefficients required for calculation are quite relevant tasks.</p> <p>It was indicated that at this stage of research, the method for calculating heat transfer in the boiler furnace, given in the normative method of thermal calculation of boiler units, should be used for calculating medium-power water heating boilers. Despite the simplifications incorporated in the mathematical model, the results of calculations and operating data do not differ significantly.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3262Techno-Economic Optimization of Heat Pump-Based Heating Systems Utilizing Heat Extraction from Watercourses2025-07-30T18:18:59+03:00N. M. SlobodianNSlobodian61@gmail.comО. І. Obodianskaolha.obodyanska@i.ua V. О. Goncharuksanderlend@ukr.net<p>The paper examines the techno-economic aspects of implementing heat pump units (HPUs) in heating systems based on the utilization of low-grade heat (LGH) from natural water sources, particularly river watercourses. Heat pump systems can efficiently use natural resources to meet energy demands, which is a crucial factor in reducing energy consumption and minimizing environmental impact. The advantages of employing a closed-loop heat extraction system with a circulating antifreeze heat transfer fluid are substantiated, ensuring reliable system operation during winter and reducing the risk of ice formation in heat exchangers.</p> <p>Several design configurations of heat exchangers for extracting heat from aquatic environments are presented. These configurations enhance heat transfer efficiency and reduce hydraulic losses in the system. 
The study also analyzes the drawbacks of conventional bottom collectors made from polyethylene pipes, including their high material consumption, complex installation, and susceptibility to clogging, which ultimately reduces overall system efficiency. As an alternative, the use of tubular grates oriented perpendicular to the flow direction is proposed. This approach improves heat exchange, reduces hydraulic resistance, and enhances heat extraction efficiency by increasing the contact surface area between the heat transfer fluid and the water environment. Such a solution significantly reduces the system’s operational costs and ensures a more stable and reliable heating process.</p> <p>The research addresses key parameters such as hydraulic resistance, material costs, optimal heat transfer fluid selection, and other operational characteristics that impact the overall economic efficiency of the system. A techno-economic optimization problem for the heat exchanger design is formulated, considering two variables: pipe diameter and total pipe length. The optimization criterion is the minimization of the system’s payback period compared to a baseline option of direct electric heating. A mathematical model is developed to determine economically feasible design parameters for the heat exchanger, accounting for HPU performance, capital investment, and potential energy savings. Conclusions are drawn regarding the potential for widespread adoption of river-sourced heat pump systems in sustainable heating practices, highlighting their capacity to reduce energy costs and environmental impact.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3263Method of Determination of Voltage Dip Source Location in Relation to the Border of Balance Responsibility2025-07-30T18:33:21+03:00A. V. Voloshkoavolosko820@gmail.comYe. V. 
Kozlovskyieugene.kozlovskiy@gmail.com<p>The uncertainty of the voltage dip source location on the border of balance responsibility complicates the timely response of maintenance personnel to faults in the electrical network and also leads to frequent legal disputes between electricity consumers and suppliers. The fault direction (towards the consumer or the electricity supplier) is determined by analyzing the negative-sequence voltage on the power line and the negative-sequence current flowing through that line, followed by comparing their relative phase angles. It is important to note that the angular characteristics of voltage dips within an internal network (e.g., within a facility) may differ significantly from those in external networks due to the heterogeneity of networks with different nominal voltage levels. This can affect the angular characteristics and lead to substantial errors. Research also shows that determining the torque for the conventional negative-sequence directional element has several limitations: the negative-sequence voltage is inversely proportional to the source power; the fault impedance reduces the level of the negative-sequence current; and the values of the traditional directional torque depend directly on the magnitudes of the negative-sequence voltage and current, which limits the ability to determine the fault direction accurately. This work presents a method for identifying the location of a voltage dip relative to the boundary of balance responsibility in a three-phase electrical network by developing a directional element based on the total negative-sequence impedance. In this approach, the total negative-sequence impedance is always negative for forward faults and always positive for reverse faults. The effectiveness of this method for fault direction detection was verified through simulation of a type B voltage dip. 
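The sign criterion described above can be sketched via symmetrical components (a minimal illustration only: phasor extraction from sampled waveforms, filtering, and the practical thresholds of a real relay are omitted, and the zero threshold on the real part is a simplification of the authors' criterion):

```python
import cmath

# 120-degree rotation operator of the symmetrical-components transform.
A = cmath.exp(2j * cmath.pi / 3)

def negative_sequence(xa, xb, xc):
    """Negative-sequence component of a three-phase phasor set."""
    return (xa + A * A * xb + A * xc) / 3

def fault_direction(va, vb, vc, ia, ib, ic):
    """Classify the voltage-dip source as 'forward' or 'reverse'.

    Following the criterion stated in the abstract: the total
    negative-sequence impedance Z2 = V2 / I2 is negative for forward
    faults and positive for reverse faults.  Inputs are complex phasors.
    """
    v2 = negative_sequence(va, vb, vc)
    i2 = negative_sequence(ia, ib, ic)
    z2 = v2 / i2
    return "forward" if z2.real < 0 else "reverse"
```

Feeding the element a pure negative-sequence phasor set with current in antiphase to voltage yields "forward", in phase yields "reverse".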
The results of the simulation demonstrated acceptable accuracy.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3264Optimization of the Integration of Electrochemical Energy Storage to Improve the Energy Efficiency of Distribution Networks2025-07-30T18:43:13+03:00V. V. Kulykkulyk.v.v@vntu.edu.uV. V. Teptiateptyavira@gmail.com<p>The paper investigates the process of placing electrochemical energy storage (EES) in the primary networks of distribution system operators (DSOs). Such storage is used to reduce peak loads on power grids and, as a result, to reduce the costs of purchasing electricity on the energy market, decrease electricity losses, and improve voltage quality. It is shown that the optimization of EES connection points, their capacity, and maximum charge/discharge power is associated with algorithmic difficulties. Due to the complexity of the efficiency indicator, it is necessary to apply complex optimality criteria and take into account active constraints. In addition, the investment climate in Ukraine, trends in the development of distribution networks, and market mechanisms cause uncertainty in decision-making. The paper proposes a formalized formulation of the problem of optimizing the integration of EES into distribution networks and develops a method for solving it. The obtained solutions increase the efficiency of planning investments in the development of energy storage systems, in particular by taking into account technical limitations on the part of DSOs, which contributes to their effective interaction with storage system operators (SSOs). To simplify the problem formulation, decomposition was used, and to solve it, the method of ideal current distribution (with respect to electricity losses) was applied. The results of the study show that this optimization problem can be reduced to a simpler one: calculating the current distribution in an equivalent circuit of electrical networks with active resistances. 
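This reduction rests on a classical property of purely resistive circuits: the natural current distribution minimizes the total I²R losses. A minimal sketch for parallel branches (illustrative only; the paper's equivalent circuits and fictitious "economic" resistances are more elaborate):

```python
def ideal_current_distribution(total_current, resistances):
    """Split a total current among parallel branches in proportion to
    branch conductance, which minimizes the total I^2*R losses.

    This is the natural (and loss-ideal) current distribution of a
    purely resistive circuit, the property the reduction exploits.
    """
    conductances = [1.0 / r for r in resistances]
    g_sum = sum(conductances)
    return [total_current * g / g_sum for g in conductances]

def losses(currents, resistances):
    """Total I^2*R losses of a given branch-current assignment."""
    return sum(i * i * r for i, r in zip(currents, resistances))
```

Any other split of the same total current over the same branches gives strictly higher losses, which is what makes the resistive equivalent circuit a valid optimization surrogate.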
Economic factors are taken into account by introducing fictitious resistances into the equivalent circuit. Such an optimization algorithm is characterized by a smaller number of calculations and high reliability of obtaining a solution close to the extremum. Taking into account trends in pricing, consumption, and generation of electricity over long periods contributes to the formation of justified design decisions for the integration of EES into distribution networks.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3255Method for Determining Optimal Configurations of Gas Metering Nodes and Gas Flow Measurement Points under Conditions of Uncertainty2025-07-30T15:37:20+03:00M. I. Gorbiychukmi_profgorb@ukr.netO. A. Skripkaskripkaoleksandr2020@gmail.com<p class="eng" style="line-height: 11.0pt;"><span lang="EN-US" style="background: white;">The authors propose a new method for determining the configuration of natural gas metering nodes and gas flow measurement points which, in addition to traditional approaches based on data on the technical and metrological characteristics of measuring instruments, also takes into account the criterion of the total cost of measuring devices under conditions of uncertainty.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US" style="background: white;">The measuring devices considered in the article are variable pressure differential flow meters, turbine gas meters, and ultrasonic flow meters, which are most often used in the construction, repair, or reconstruction of gas metering units and gas flow measurement points. The problem of selecting the optimal configuration of a natural gas metering unit is formulated, provided that the parameters in the optimization problem, which are caused by measurement errors, are treated as fuzzy numbers of the (L-R)-type. 
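A triangular (L-R) membership function, and one possible Gaussian stand-in of the kind mentioned in this abstract, can be sketched as follows (the specific rule for matching the Gaussian spread to the triangle is our assumption; the paper's approximation may differ):

```python
import math

def triangular_membership(x, a, m, b):
    """Triangular (L-R) fuzzy-number membership: support [a, b], mode m."""
    if a < x <= m:
        return (x - a) / (m - a)
    if m < x < b:
        return (b - x) / (b - m)
    return 1.0 if x == m else 0.0

def gaussian_approximation(x, a, m, b):
    """Gaussian stand-in for the triangular membership function.

    The spread sigma is chosen (hypothetically) so that both functions
    pass through membership level exp(-1/2) at the same distance from
    the mode; other matching rules are equally possible.
    """
    sigma = (b - a) / 2 * (1 - math.exp(-0.5))
    return math.exp(-((x - m) ** 2) / (2 * sigma ** 2))
```

The smooth Gaussian form is differentiable everywhere, which is what makes it convenient inside the constraint coefficients of an integer programming formulation.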
It is shown that the initial problem is transformed into an integer programming problem in which the constraint coefficients determining the unaccounted losses depend on the parameters of the triangular membership function, which is approximated by a Gaussian function. The problem of determining the optimal number of measuring devices for natural gas metering units or gas flow measurement points, formulated in this work, is solved using software in the MATLAB environment. The effectiveness of the developed optimization software was tested on a specific example of choosing the optimal configuration of a metering node based on the criterion of minimal financial costs for equipping the node with natural gas metering devices for one of the gas distribution stations. A comparative analysis of two variants of the optimization problem was carried out: in a deterministic formulation, and taking into account the fuzziness of the coefficients caused by measurement errors, within the constraint imposed by unaccounted losses.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US" style="background: white;">To verify the operation of the proposed method and the developed software, initial data reflecting the real operating conditions of gas metering units and gas flow measurement points were used. The use of the proposed method will reduce the costs of construction, repair, or reconstruction of gas metering units and gas flow measurement points.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3266Information Technology of the Fourier-Integral Identification Method Implementation for Recovery of Input Signals of Information and Measurement Systems2025-07-31T10:13:45+03:00O. O. Voitsekhovskaolgav1085@gmail.comB. I. Mokinborys.mokin@gmail.comO. B. 
Mokinabmokin@gmail.com<p>Information technology has been developed for implementing the Fourier-integral identification method, created in the 1980s by B. I. Mokin and generalized by O. B. Mokin, for restoring the input signals of information and measuring systems from their output signals. The technology is based on a computer program written in Python. The first part of the program restores the input signal of the information and measuring system from its output signal as a truncated Fourier series; the finite number of restored harmonic components causes ripple on the graph of the restored signal. The second part transforms the truncated Fourier series into a Fourier series with an infinite number of harmonic components, i.e., it forms an equivalent model of the input signal cleaned of the distortions caused by the finite number of restored harmonics. With appropriate justification, this transformation was carried out using a nonlinear variant of the least-squares method. In the course of implementing the program, it was demonstrated, both numerically and graphically, how the dynamic characteristics of information and measuring systems affect their output signals. This confirms the need to supplement information and measuring systems with information technologies of the structure proposed in this article for restoring their input signals from their output signals.
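The truncation effect described in the abstract can be reproduced in a few lines of stdlib Python. This sketch is not the article's program: it merely sums the first n harmonics of a unit square wave and shows that a finite partial sum approaches the true value while a small number of harmonics leaves visible ripple.

```python
import math

def truncated_fourier_square(t, n_harmonics, period=2 * math.pi):
    """Partial Fourier sum of a unit square wave using its first n odd
    harmonics: (4/pi) * sum over k of sin((2k-1)*w*t) / (2k-1)."""
    w = 2 * math.pi / period
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * w * t) / (2 * k - 1)
        for k in range(1, n_harmonics + 1)
    )

# At t = pi/2 the true square-wave value is 1. More harmonics bring the
# partial sum closer to 1, but any finite sum retains some ripple.
for n in (3, 10, 50):
    print(n, round(truncated_fourier_square(math.pi / 2, n), 4))
```

The same mechanism is why the restored input signal in the first stage shows ripple: only a finite number of harmonic components is available, and the second stage of the described technology exists precisely to remove that distortion.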
The results were analyzed, and solutions were proposed for adapting the structure of the proposed information technology, and of the Python program underlying it, to other operating conditions of an information and measuring system whose input signal is restored from its output signal.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3265Automatic Knowledge Extraction from Environmental Reports with Reference to Time and Spatial Coordinates of Water Bodies2025-07-30T18:52:27+03:00K. O. Bondalietovbondaletov.k@gmail.comV. B. Mokinvbmokin@gmail.comI. M. Shtelmakhigor.shtelmakh@vntu.edu.uaO. V. Slobodianiukolenas8@gmail.com<p class="eng" style="line-height: 11.0pt;"><span lang="EN-US">The paper presents a new method for automatically extracting environmental knowledge from reports and news texts containing facts about the state of river waters or their pollution. The extracted facts are bound to the spatial coordinates of specific water bodies and to time intervals. The relevance of the work stems from the wide availability of such environmental data in the news, on institutional websites, and in social media, and from the need for its fast and accurate processing. The proposed method combines the detection of facts about the state of waters or their pollution, the recognition of geographical names in the text and headlines, and the determination of temporal features by analyzing the hierarchical structure of the document.
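The key-phrase matching used in this linking step (the abstract names the Jaccard measure) can be sketched in a few lines of Python. The fact and water-body phrase sets below are hypothetical, chosen only to show how the best-matching water body would be selected.

```python
def jaccard(a, b):
    """Jaccard similarity of two key-phrase sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical key phrases extracted from a reported fact and from
# descriptions of two candidate water bodies (illustrative only).
fact_phrases = {"ammonium", "exceedance", "downstream", "buh"}
water_bodies = {
    "Southern Buh": {"buh", "downstream", "basin"},
    "Dniester": {"dniester", "estuary", "basin"},
}
best = max(water_bodies, key=lambda name: jaccard(fact_phrases, water_bodies[name]))
print(best)  # -> Southern Buh
```

The fact is linked to the water body whose key-phrase list overlaps it most, yielding one "fact – water body" pairing per extracted fact.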
The method optimizes a contextual-semantic criterion that maximizes the completeness and probability of detecting all existing connections between the key phrases of facts, time periods, and water bodies while minimizing the number of false-positive connections between them. The connections are formalized as “subject–predicate–object” (SPO) triplets, and the Jaccard measure is used to quantify the similarity between the lists of key phrases that characterize the facts and the water bodies. Knowledge extraction relies on identifying and using the hierarchical structure of the document, on large language models, and on Retrieval-Augmented Generation (RAG) for regularly updating the knowledge base and binding facts to time intervals and spatial coordinates. The result is a structured knowledge base in the form of “fact – water body – time interval” triplets, which can be used to analyze the dynamics of water status, identify trends, and support management decisions aimed at improving the state of surface waters.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US">The application of the proposed method is demonstrated using the example of the 2019 annual activity report of the Southern Buh River Basin Water Resources Management, which illustrates its efficiency.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3267Information Technology of Object Recognition and Localization Based on Weak Supervised Learning: Overview of Problems and Methods2025-07-31T10:26:20+03:00V. Ye. Zelenyivladyslavzelenyi@gmail.comA. V.
Kozlovskyiakozlovskyi@vntu.edu.ua<p class="eng"><span lang="EN-US">In the modern period, marked by the exponential growth of digital data and computing resources, the search for reliable object recognition and localization systems has become an increasingly important task in many fields, including industrial automation, healthcare automation, environmental monitoring, etc. Traditionally, the development of such systems has relied heavily on the acquisition and processing of large datasets annotated with ground-truth labels, a labor-intensive and costly manual process. The paradigm of weakly supervised learning (WSL), however, has catalyzed a profound transformation in this landscape, offering a compelling alternative in which machine learning models are trained on less precise or ambiguous forms of supervision.</span></p> <p class="eng" style="margin-top: 0cm;"><span lang="EN-US">Relaxing the strict annotation requirements of fully supervised learning not only eases the burdensome annotation process, but also extends the scope of machine learning to scenarios where obtaining accurate annotations is impractical, too expensive, or simply impossible. This shift in perspective has sparked a surge of research interest and investment in harnessing weak supervisory signals to enhance object recognition and localization capabilities.</span></p> <p class="eng" style="margin-top: 0cm;"><span lang="EN-US">The evolution of WSL in IT heralds a paradigm shift in how we design, develop, and deploy intelligent systems across a wide range of real-world applications. By enabling machines to extract meaningful information from imperfect or incomplete supervisory signals, WSL improves not only the efficiency and scalability of object recognition and localization systems, but also their adaptability and resilience to changing data and evolving application areas.
Thus, the convergence of WSL and IT is poised to reshape modern computing, opening unprecedented opportunities for innovation and discovery.</span></p> <p class="eng" style="margin-top: 0cm;"><span lang="EN-US">In the field of weakly supervised learning for object recognition and localization, several challenges persist that hinder its effectiveness and adoption. Ambiguous and noisy weak supervisory signals often hamper model performance and reduce localization accuracy. In addition, the semantic gap and concept drift create significant obstacles, affecting the adaptability and relevance of WSL models over time. Ethical and societal concerns, including fairness and transparency, further complicate the deployment of WSL in real-world applications. Solving these problems requires improved robustness to noisy signals, better localization accuracy, scalability, generalizability, and attention to ethical considerations. By addressing these issues, WSL can reach its full potential and pave the way for more reliable and ethically sound intelligent systems. The article also considers the prospects for further research in the field of weakly supervised learning.</span></p> <p class="eng" style="margin-top: 0cm;"><span lang="EN-US">An overview of current approaches to object recognition and localization based on weakly supervised learning (WSL) is presented. Key challenges of WSL – limited annotations, coarse labels, and data noise – are analyzed, and an integrated approach for addressing these issues is described. The proposed approach combines improved data preprocessing, adaptive loss functions accounting for uncertainty, data augmentation, integration of domain-specific knowledge, and self-training strategies.
The novelty of this combination is substantiated, and a theoretical possibility of at least 0.1% improvement in model quality over known solutions is shown. A comparative analysis of existing methods (including the state-of-the-art SAM segmentation model) is provided, highlighting the advantages of the proposed approach.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3268Application of Machine Learning for Target Audience Clustering in Web Applications2025-07-31T10:36:50+03:00I. V. Pikhiryna.v.pikh@lpnu.uaYu. Yu. Merenychmerenich.julian@uzhnu.edu.ua<p class="eng" style="line-height: 11.0pt;"><span lang="EN-US">In this study, a machine learning method was applied for clustering data on the target audience of e-commerce web applications. Machine learning is a powerful analytical tool that enables the automatic identification of patterns in large datasets, improving the accuracy of user behavior prediction. Key interaction metrics with web applications were selected, including bounce rate, session duration, and conversion rate. The input data were normalized. To ensure proper normalization and the correct operation of machine learning algorithms, a method was used to scale values within the range from zero to one. The optimal number of clusters was determined using the "elbow" method, which analyzes the relationship between the number of clusters and the within-cluster sum of squared distances. The k-means method was applied to analyze behavioral parameters, minimizing the sum of squared distances between data points and cluster centroids using the Euclidean metric. 
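The pipeline this abstract describes (min-max scaling of behavioral metrics to [0, 1], then k-means with the Euclidean metric) can be sketched without any ML libraries. The metrics and values below are hypothetical, not the study's data; the sketch shows only the mechanics of scaling, centroid assignment, and centroid update.

```python
import math
import random

def minmax(rows):
    """Scale each column to the range [0, 1], as described in the abstract."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in rows]

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to the nearest centroid by
    Euclidean distance, then recompute centroids, and repeat."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        centers = [[sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical per-user metrics: (bounce rate %, session minutes, conversion %).
data = [[80, 1, 0.5], [75, 2, 0.7], [20, 9, 4.8], [25, 8, 5.1]]
centers, clusters = kmeans(minmax(data), k=2)
print(sorted(len(c) for c in clusters))  # -> [2, 2]
```

In practice the cluster count k would come from the elbow method the abstract mentions: run k-means for increasing k and pick the point where the within-cluster sum of squared distances stops dropping sharply.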
The results were visualized using a three-dimensional plot, representing the distribution of clusters based on the analyzed parameters.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US">The clustering results identified four groups of users with different interaction characteristics with the web resource. Users in the first cluster exhibited low engagement, short session durations, and high bounce rates, indicating insufficient content relevance. The second cluster demonstrated prolonged interaction with the web resource, but the high bounce rate may suggest navigation difficulties. The third cluster was characterized by a high conversion rate with moderate session duration, indicating an efficient user experience. The last cluster had the lowest bounce rate and the highest conversion rate, reflecting a strong alignment between content and user needs.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US">The practical significance of the obtained results lies in the possibility of applying clustering methods to adapt UX/UI solutions, optimize content, and enhance conversion rates. The proposed approach can be utilized in e-commerce, digital marketing, and web analytics to improve user interaction strategies.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3269Analysis of Methods and Tools of Proactive Defense against Deepfake2025-07-31T11:11:02+03:00M. B. Marchukdzgamech@gmail.comV. V. Lukichovlukichov.vitalyi@vntu.edu.ua<p class="eng" style="line-height: 11.0pt;"><span lang="EN-US">The development of artificial intelligence has been one of the main trends in recent years. Although this technology <span style="letter-spacing: .1pt;">serves for the benefit of humanity, it can also be used for malicious purposes, such as spreading disinformation or blackmail. 
Such purposes are often realized using technologies that create so-called Deepfake content.</span></span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US">The article describes the results of a study of methods and means of active protection against the malicious use of Deepfake. Deepfake is a general name for images, video, or audio files created by artificial neural networks that show a person saying or doing something they never actually said or did. Such AI-generated materials appear authentic to those who do not know their origin, and without special tools it is difficult to distinguish a real image from a fake one.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US">Content-generation technologies based on artificial intelligence, in particular Deepfake, are often used to create materials for disinformation campaigns, blackmail, or other malicious purposes. Hence the need to develop means of protection against the malicious use of Deepfake.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US">Passive protection methods, based on the ability of machine learning models to distinguish authentic content from generated content, depend on the architectures of the Deepfake generation models and therefore quickly become outdated as the latter develop more rapidly.
Therefore, there is a need to develop active protection methods based on watermarks, which are used either to track content or to interfere with the operation of Deepfake generation models.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US">The paper describes and analyzes the existing active methods of protection against Deepfake, their target application areas, advantages and disadvantages, and technical characteristics, and suggests directions for further research in this area. Special attention is paid to protection methods based on steganography and watermarking. The advantages of active protection methods over passive ones are considered.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3270Automated Approach for Dating English Text Using Transformer Neural Networks 2025-07-31T11:23:08+03:00M. O. Lytvynlitvinka42@gmail.comL. M. Oleshchenkooleshchenkoliubov@gmail.com<p>The paper examines the existing methods of text dating using neural networks, highlighting their advantages and limitations. Text dating is a crucial task in fields such as history, archival studies, linguistics, and forensic science, as accurately determining the creation time of a document can help verify its authenticity, establish authorship, and detect forgeries. However, traditional methods based on stylometric or statistical approaches often lack accuracy, especially when dealing with large volumes of text data. This study proposes an approach for dating English-language texts using transformer neural networks. The model achieves an accuracy of 85 % within a 30-year range for texts written between the 15th and 20th centuries, outperforming existing models applied to English text.
The core idea of the proposed automated approach is to utilize transfer learning to fine-tune a pre-trained transformer neural network, optimizing it for the classification of text fragments by decade. One key advantage of this approach is the use of the transformer architecture, which, through the self-attention mechanism, effectively captures complex relationships within a text. Another significant benefit is the application of transfer learning, which reduces training time and computational resources compared to training a model from scratch. The approach was implemented in Python using the transformers library for training and testing the neural network, the datasets library for working with the dataset, and numpy for the calculations. Experimental results demonstrated high accuracy: 86 % within a 30-year range and 73 % within a 20-year range on the test dataset. For the 19th and 20th centuries, the model achieved an accuracy of 89 % and 90 %, respectively, while accuracy for earlier centuries was lower, averaging around 30 %. The research also examines the possibility of identifying features that indicate a text's association with a specific period by extracting words with the highest attention scores. Future research will focus on improving the accuracy for underrepresented historical periods by expanding and refining the dataset. Further enhancements may be achieved by optimizing model hyperparameters and experimenting with alternative neural network architectures. Another direction for future research is to explore methods for identifying linguistic or stylistic features that mark texts as belonging to a certain historical period, in order to make the neural network's results more interpretable for the user.
The proposed approach has potential applications in historical research, document authentication, plagiarism detection, literary studies, and forensic analysis.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3271Algorithm for Forming a Computer Vision Model in the Interests of an Air Reconnaissance System2025-07-31T11:39:08+03:00I. M. Tupitsyaivan20081982@gmail.comB. M. IvashchukIvashchukb_journal@gmail.com Yu. P. VolkovVolkov_journal@gmail.comM. V. ParkhomenkoParkhomenko_journal@gmail.comO. H. Halepa Halepa_journal@gmail.com<p class="eng" style="line-height: 11.0pt;"><span lang="EN-US">The significant growth of data traffic generated by unmanned aircraft systems and transmitted to the command and control station has increased the requirements for the collection and processing of aerial reconnaissance data. The main requirements are the efficiency of processing aerial monitoring data and the reliability of aerial reconnaissance data. In this regard, integrating computer vision and artificial intelligence technologies into the processing of intelligence information is a relevant issue. Requirements are formulated for a computer vision model in the interests of the aerial reconnaissance system, the main ones being the following: provision of automated detection and classification of objects of interest in digital images (video frames); provision of the required level of efficiency of aerial reconnaissance data processing; guaranteeing the possibility of transforming the computer vision model; ensuring the necessary level of reliability of aerial reconnaissance data under the conditions of UAV use; taking into account the professional competencies of specialists in collecting and processing intelligence information; simplicity of algorithmic implementation; and efficiency of model formation.
</span></p> <p class="eng" style="margin-top: 0cm; line-height: 11.0pt;"><span lang="EN-US">The algorithm for forming a computer vision model is developed in the interests of the air reconnaissance system to increase the efficiency of processing air monitoring data while providing the required level of reliability. A distinctive feature of the proposed algorithm is that it takes into account the level of operator training and the computing power of the unmanned aviation complex (command and control station) when forming the computer vision model. This makes it possible to choose one of two approaches to training the model (autonomously, or using the resources of open web platforms), which in turn creates conditions for increasing the efficiency of processing air monitoring data while providing the required level of reliability. Further research will be directed at assessing the effectiveness of the proposed approach for increasing the autonomy of unmanned aviation systems in the interests of the air reconnaissance system.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3272Method for Assessing the Effectiveness of Knowledge Management Using Modern Information Systems2025-07-31T14:32:31+03:00D. O. Robotkodenys133@gmail.comO. O. Kovalenkoок@vntu.edu.ua<p class="eng" style="line-height: 10.0pt;"><span lang="EN-US">The article presents a comprehensive analysis of modern approaches to the classification of knowledge and the organization of knowledge management systems in the context of contemporary organizational functioning. Main types of knowledge—explicit and tacit—are examined, including their characteristics, differences, and the specifics of their formalization and transfer within corporate environments.
Various methods for structuring knowledge are systematized according to the type of project being implemented, the sectoral specifics of the enterprise, its operational context, organizational culture, and strategic priorities.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 10.0pt;"><span lang="EN-US">Particular attention is devoted to evaluating the effectiveness of modern knowledge management tools, such as Jira, Confluence, and SharePoint, which are widely used in IT project management and corporate administration practices. It has been identified that, despite their significant functional potential, these systems exhibit limitations in several critically important aspects, such as ensuring the relevance of knowledge, integration with dynamic business processes, and support for working with tacit knowledge, which often remains outside formal information structures.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 10.0pt;"><span lang="EN-US">Within the scope of the study, a method for assessing the effectiveness of knowledge management is proposed, based on the development of a quantitative model. This model enables a formalized analysis of existing knowledge management systems (KMS), identification of their weaknesses, and determination of directions for further improvement. The proposed mathematical model for quantitative assessment considers key factors such as the degree of knowledge formalization, the level of practical knowledge usage, content relevance, the breadth of subject matter coverage, and the degree of integration with business processes.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 10.0pt;"><span lang="EN-US">The modeling results support the hypothesis regarding the limited effectiveness of existing KMS in rapidly changing organizational environments, which are characterized by high levels of uncertainty and the need for flexible and adaptive knowledge mechanisms. 
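The quantitative assessment described above weighs factors such as knowledge formalization, practical usage, content relevance, subject coverage, and integration with business processes. The sketch below is not the authors' model: it is a minimal illustration of how such a weighted linear score could be computed, with entirely hypothetical factor names, weights, and values.

```python
def kms_score(factors, weights):
    """Weighted linear convolution of normalized factor scores (0..1).
    The factor names and weights are illustrative assumptions, not the
    model proposed in the article."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[name] * factors[name] for name in weights)

weights = {"formalization": 0.2, "usage": 0.25, "relevance": 0.25,
           "coverage": 0.1, "integration": 0.2}
# Hypothetical assessment of some existing KMS deployment.
example_kms = {"formalization": 0.8, "usage": 0.6, "relevance": 0.5,
               "coverage": 0.7, "integration": 0.4}
print(round(kms_score(example_kms, weights), 3))  # -> 0.585
```

A score built this way makes weaknesses visible factor by factor: here the low "integration" and "relevance" values drag the total down, mirroring the limitations the article identifies in existing systems.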
Based on an analytical example, it is demonstrated that traditional approaches to knowledge management do not adequately support the processes of tacit knowledge generation, transformation, and utilization.</span></p> <p class="eng" style="margin-top: 0cm; line-height: 10.0pt;"><span lang="EN-US">In conclusion, the necessity and relevance of developing new knowledge management systems are substantiated. These systems should be more flexible, context-sensitive, equipped with built-in analytical tools, and capable of processing unstructured information while supporting work with tacit knowledge. Such systems can serve as strategic tools for ensuring organizational competitiveness in the knowledge economy.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3258Review of the Methods for Analyzing Site Infiltration of Solid Waste Disposal2025-07-30T16:36:28+03:00R. V. Petrukprrom07@gmail.comV. V. Faichukfajjchuk@gmail.com<p class="eng" style="line-height: 10.0pt;"><span lang="EN-US">Methods of analyzing the leachate of solid waste disposal sites are considered. The most appropriate long-term solution in this area is to develop systems for sorting and recycling household waste; however, in present-day Ukraine such systems are difficult to implement, and their deployment may take considerable time, during which existing landfills will continue to poison the soil, surface water and groundwater, and the atmosphere. In practice, it is most effective to take measures to reduce the amount of waste generated, to minimize the damage from waste already disposed of at landfills and dumpsites, and to reduce the impact on the soil and water environment. It is therefore necessary to develop methods of leachate management, treatment, etc.
However, such research should be preceded by a study of the properties of the leachate and of its formation, because in addition to relatively environmentally benign components, landfill leachate may contain highly toxic components that cannot be neutralized and that, beyond environmental damage, cause irreparable harm to human health. Since leachate is a multicomponent mixture, both biological and physicochemical methods should be used for high-quality treatment. Based on the results of the study, characterization of leachate samples by 13 indicators is proposed. The most effective indicators for analyzing the leachate of solid waste disposal sites have been identified: pH, electrical conductivity, redox potential, ammonium salts, sulfate ions, chloride ions, and Ni, Mn, and Zn cations. The determining criterion for selecting the most effective methods of leachate analysis was the possibility and feasibility of performing a complete analysis of the leachate, which affects the overall efficiency of the process.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3256Model for Selecting Window Structures in Building Design with Consideration of the Security Situation2025-07-30T15:56:22+03:00H. S. Ratushnyakratushnyak@vntu.edu.uaV. V. Pankevychpan@vntu.edu.ua О. D. Pankevychpankevich@vntu.edu.uaA. Ye. Humenchukflora.butterfly68954@gmail.com<p class="eng" style="line-height: 11.0pt;"><span lang="EN-US">The problem of choosing a window structure for residential buildings on the basis of multicriteria analysis is considered. Mathematical methods that can be used to develop a model for the multicriteria selection of a window construction option are analyzed. To evaluate the values of the criteria, it is proposed to use the Saaty method of pairwise comparisons (hierarchy analysis method).
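The Saaty method mentioned here derives criterion weights from a pairwise-comparison matrix. The sketch below, a stdlib-Python illustration rather than the paper's calculation, approximates the priority vector by the common row geometric-mean technique; the 3x3 judgments are hypothetical, not taken from the article.

```python
import math

def priorities(pairwise):
    """Approximate the Saaty priority vector of a pairwise-comparison
    matrix via row geometric means, normalized to sum to 1."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative comparison of three criteria, e.g. energy efficiency vs.
# market price vs. blast resistance (judgments are hypothetical):
# A[i][j] = how strongly criterion i is preferred over criterion j.
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = priorities(A)
print([round(x, 3) for x in w])
```

The resulting weights then feed the linear convolution of weighted partial criteria that the abstract describes, producing a single score per window-construction alternative.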
The task of choosing a rational variant of window construction from a set of solution options at the design stage is formulated. The stages of applying the method of pairwise comparisons in the decision-making model for choosing window structures are determined. To develop a decision-making model for choosing a rational variant of a window structure, the factors and criteria influencing the decision-making process are identified and formalized. Decision-making takes place when each alternative (window construction option) meets all the requirements for the design of window structures according to building codes, while no single alternative prevails over the others by all criteria. The article proposes a decision-making model that allows systematic and reasoned selection of a window structure, taking into account key parameters and the security factor. The model includes five main evaluation criteria: energy efficiency, architectural attractiveness and functionality, market price, burglary protection, and blast resistance. The blast resistance criterion is an innovative one for decision-making regarding the choice of window construction. The values of the criteria for the alternatives are evaluated expertly using the Saaty pairwise comparison method. To take into account the influence of the defined criteria, the method of linear convolution of weighted partial criteria is used, which makes it possible to formalize the selection process and choose a rational design option when no alternative is an absolute leader in all parameters.</span></p>2025-06-27T00:00:00+03:00Copyright (c) 2025 https://visnyk.vntu.edu.ua/index.php/visnyk/article/view/3257Universal Mobile Unit for the Preparation of Cellular Concrete and the Results of the Obtained Products Testing2025-07-30T16:20:12+03:00О. S. Vasylieva.s.vasiliev.76@gmail.comV. P.
Kulailykym339@gmail.com<p>The installation for the preparation of cellular concrete belongs to the technological equipment of the construction sector and is used for preparing various building mixtures, including mortars and other materials.</p> <p>The advantage of the machine is its ease of use and simplicity of design. The disadvantage is that the concrete mixer has a small mixing volume. The main task is to choose equipment for preparing aerated concrete. The problem is solved by installing a tightly closing lid that makes the mixer airtight, which makes it especially effective for the production of cellular concrete. At the same time, the equipment retains its versatility, which makes it possible to use it for other building mixtures. The production of aerated concrete includes mixing the raw components, pouring the mixture into the mold, swelling, pre-hardening, cutting, and final hardening. Compressive strength is studied by uniformly increasing the load on the sample until failure, which makes it possible to assess its bearing capacity. Increasing the cement content in the mixture helps to increase the strength of aerated concrete, but a large amount of cement can negatively affect its thermal insulation properties. The lower the density of the aerated concrete, the lower the load at which it fails; the evenness of the surface of the cube sample is also an important factor in the experiment.
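The cube test described above reduces to a simple calculation: compressive strength is the failure load divided by the loaded face area. The sketch below is a generic illustration with hypothetical numbers, not data from the article's tests.

```python
def compressive_strength_mpa(failure_load_kn, edge_mm):
    """Cube compressive strength = failure load / loaded face area.
    The load in kN is converted to N; dividing by the area in mm^2
    gives the strength directly in MPa (1 MPa = 1 N/mm^2)."""
    return failure_load_kn * 1000.0 / (edge_mm ** 2)

# Hypothetical example: a 100 mm aerated-concrete cube failing at 45 kN.
print(compressive_strength_mpa(45, 100))  # -> 4.5
```

The same formula explains the density remark in the text: a lower-density cube fails at a lower load, and an uneven loading face distorts the effective area, and hence the computed strength.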
In addition to standard mechanical tests, it is advisable to use non-destructive testing methods, such as ultrasonic scanning and X-ray tomography, to assess the quality of aerated concrete.</p> <p>The versatility of the design allows the mixer to work with different types of mixtures, ensuring high productivity and product quality while adapting to various construction needs.</p> <p>Among the advantages of the proposed installation for preparing cellular concrete, the key one is versatility: the mixer can be used not only for aerated concrete but also for other materials, such as fiber-reinforced concrete and dry building mixtures.</p>2025-06-27T00:00:00+03:00Copyright (c) 2025