33 Kirpichnaya Ulitsa
Phone/Fax: (495) 771-32-38
Svetlana V. Maltseva, Deputy Head of Research and Partnerships
Olga A. Tsukanova, Deputy Head for Academic Affairs
Mikhail M. Komarov, Deputy Head for International Relations
Vladimir Alekseevich Samodurov, Deputy Head for Admissions and Alumni Affairs
This work is devoted to the investigation of particle acceleration during magnetospheric dipolarizations. A numerical model is presented that takes into account the four scenarios of plasma acceleration that can be realized: (A) total dipolarization with characteristic time scales of
~3 min; (B) a single peak of the normal magnetic component Bz occurring on a time scale of less than 1 min; (C) a sequence of rapid
jumps of Bz interpreted as the passage of a chain of multiple dipolarization fronts (DFs); and (D) mechanism (C) acting together with the
accompanying enhancement of electric and magnetic fluctuations with a small characteristic time scale of ~1 s. In the framework of the
model, we have obtained and analyzed the energy spectra of four plasma populations: electrons e, protons H+, helium He+, and oxygen O+
ions, accelerated by the above-mentioned processes (A)-(D). It is shown that O+ ions can be accelerated mainly by mechanism (A);
H+ and He+ ions (and to some extent electrons) can be accelerated more effectively by mechanism (C) than by the single dipolarization
(B). It is found that the high-frequency electric and magnetic fluctuations accompanying multiple DFs (D) can strongly accelerate electrons
while only weakly influencing the other plasma populations. The modeling results clearly demonstrate the spatially and temporally
resonant character of the particle acceleration processes. The maximum particle energies are estimated as functions of the scale of the magnetic acceleration
region and the value of the magnetic field. The shapes of the energy spectra are discussed.
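As a minimal, hedged illustration of the kind of estimate mentioned at the end of this abstract, the sketch below computes the energy gain W = qEL of a particle crossing an acceleration region of scale L in an induced electric field E ~ vB. All numerical values are illustrative assumptions, not taken from the paper.

```python
# Back-of-envelope estimate of the maximum energy a charged particle can gain
# while crossing an acceleration region of scale L in an induced electric
# field E ~ v * B (motional field of a front moving at speed v).

def max_energy_ev(v_m_s, b_tesla, l_m):
    """W = q * E * L with E = v * B; for charge q = e the value of E * L
    in volts equals the energy gain in electronvolts."""
    e_field = v_m_s * b_tesla      # induced electric field, V/m
    return e_field * l_m           # energy gain in eV for unit charge

# Illustrative (assumed) magnetotail numbers: front speed ~300 km/s,
# Bz jump ~10 nT, acceleration region ~1 Earth radius (~6.4e6 m).
w = max_energy_ev(3e5, 10e-9, 6.4e6)
print(f"{w / 1e3:.1f} keV")   # energy scales linearly with both B and L
```

The linear dependence on both B and L is the point of the sketch: doubling either the field jump or the region scale doubles the maximum energy gain.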
The paper deals with cyclostationarity as a natural extension of stationarity, the key property used in designing widely-used models of random processes. A comparative example of two processes, one wide-sense stationary and the other wide-sense cyclostationary, is given in the paper and reveals the inadequacy of the conventional stationary description based on one-dimensional autocorrelation functions. It is shown that two significantly different random processes can be characterized by exactly the same one-dimensional autocorrelation function, while their two-dimensional autocorrelation functions provide a view in which the difference between processes of the two above-mentioned classes becomes much clearer. A more concise representation, obtained by expanding the two-dimensional autocorrelation function into a Fourier series in which the cyclic frequency appears as the transform parameter, is also illustrated. A closed-form expression for the components of the cyclic autocorrelation function is given for a random process that is an infinite pulse train made of rectangular pulses with randomly varying amplitudes.
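A minimal numerical sketch of the effect described in this abstract: for a rectangular pulse train with i.i.d. random amplitudes, the cyclic autocorrelation estimated at the cyclic frequency 1/T is clearly nonzero, while it vanishes at incommensurate frequencies. All parameters are illustrative; this is not the paper's closed-form derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10                        # pulse period in samples
K = 20000                     # number of pulses
x = np.repeat(rng.standard_normal(K), T)   # rectangular pulses, random amplitudes

def cyclic_acf(x, tau, alpha):
    """Estimate R_alpha(tau) = < x[n] x[n+tau] exp(-j 2 pi alpha n) >."""
    n = np.arange(len(x) - tau)
    prod = x[:len(x) - tau] * x[tau:]
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n))

tau = T // 2
r_cyclic = cyclic_acf(x, tau, alpha=1.0 / T)   # at the cyclic frequency 1/T
r_other = cyclic_acf(x, tau, alpha=0.0371)     # at an incommensurate frequency
print(abs(r_cyclic), abs(r_other))   # only the first is substantially nonzero
```

At lag T/2 the product x[n]x[n+tau] has a mean that is periodic in n with period T, which is exactly what the nonzero cyclic component at alpha = 1/T detects.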
Urban greenery such as trees can effectively reduce air pollution in a natural and eco-friendly way. However, how to spatially locate and arrange greenery in an optimal way remains a challenging task. We developed an agent-based model of air pollution dynamics to support the optimal allocation and configuration of tree clusters in a city. The Pareto optimal solutions for greenery in the city were computed using the suggested heuristic optimisation algorithm, considering the complex absorptive-diffusive interactions between agent-trees (tree clusters) and air pollutants produced by agent-enterprises (factories) and agent-vehicles (car clusters) located in the city. We applied and tested the model with empirical data in Yerevan, Armenia, and successfully found the optimal strategy under the budget constraint: planting various types of trees around kindergartens and emission sources.
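The Pareto-optimality notion used above can be sketched with a small non-dominated filter; the (pollution, cost) pairs below are hypothetical, not from the Yerevan case study.

```python
def pareto_front(points):
    """Return the non-dominated subset when all objectives are minimised
    (here: residual pollution and planting cost)."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (pollution, cost) outcomes of candidate greenery layouts:
layouts = [(5.0, 10.0), (3.0, 12.0), (4.0, 11.0), (5.5, 10.5), (6.0, 9.0)]
print(pareto_front(layouts))
```

The layout (5.5, 10.5) is dropped because (5.0, 10.0) is better in both objectives; every remaining layout trades pollution against cost, which is the set the heuristic optimiser searches for.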
Evolution on changing fitness landscapes (seascapes) is an important problem in evolutionary biology. We
consider the Moran model of finite population evolution with selection in a randomly changing, dynamic
environment. In the model, each individual has one of the two alleles, wild type or mutant. We calculate the
fixation probability by making a proper ansatz for the logarithm of fixation probabilities. This method has been
used previously to solve the analogous problem for the Wright-Fisher model. The fixation probability is related to
the solution of a third-order algebraic equation (for the logarithm of fixation probability). We consider the strong
interference of landscape fluctuations, sampling, and selection when the fixation process cannot be described by
the mean fitness. Such an effect appears if the mutant allele has a higher fitness in one landscape and a lower
fitness in another, compared with the wild type, and the product of effective population size and fitness is large.
We provide a generalization of the Kimura formula for the fixation probability that applies to these cases. When
the mutant allele has a fitness (dis-)advantage in both landscapes, the fixation probability is described by the mean fitness.
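As background for the fluctuating-landscape results above, the sketch below computes the classical fixation probability of a single mutant in the static-fitness Moran model in two equivalent ways, via the closed-form expression and via the first-step recurrence; the paper's dynamic-environment generalization is not reproduced here.

```python
# Fixation probability of a single mutant with constant relative fitness r
# in the standard Moran model of a population of size N (the static-landscape
# baseline that the fluctuating-environment theory generalizes).

def fixation_closed_form(r, N):
    """rho = (1 - 1/r) / (1 - r**(-N))."""
    return (1 - 1 / r) / (1 - r ** (-N))

def fixation_from_recurrence(r, N):
    """x_1 = 1 / sum_{k=0}^{N-1} (1/r)**k, from the birth-death recurrence
    with backward/forward transition ratio q_j / p_j = 1/r."""
    s, prod = 1.0, 1.0
    for _ in range(1, N):
        prod /= r
        s += prod
    return 1.0 / s

r, N = 1.02, 100
print(fixation_closed_form(r, N), fixation_from_recurrence(r, N))
```

In the neutral limit r -> 1 both expressions reduce to 1/N, the standard sanity check.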
An age-structured bioeconomic model, completely continuous in age and time, is developed in order to compare it with traditional discrete models. Both types have advantages and disadvantages. The continuous framework complements discrete models, as it allows deeper and more transparent analytical study and leads to analytical results that would be difficult to achieve within a discrete framework. To make the model realistic, a nonlinear recruitment function is introduced, and steady-state solutions and constant-effort optimal fishing are studied analytically. In addition, the framework has been used for numerical analysis. Simulations are used to investigate how optimal harvesting patterns vary with parameter values.
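A minimal numerical sketch of a continuous-age steady state of the kind studied above, with an illustrative Beverton-Holt recruitment function and weight-at-age curve; all parameter values are assumptions, not the paper's.

```python
import math

# Steady state of a continuous-age stock n(a) = R * exp(-Z a), Z = M + qE,
# with Beverton-Holt recruitment R = a_bh * S / (1 + b_bh * S), where the
# spawning biomass is S = R * c and c is the biomass produced per recruit.

M, q, E = 0.2, 0.5, 0.4           # natural mortality, catchability, effort
Z = M + q * E                     # total mortality under constant effort
a_bh, b_bh = 3.0, 0.01            # Beverton-Holt parameters

def biomass_per_recruit(Z, a_max=50.0, da=0.001):
    """c = integral_0^a_max w(a) exp(-Z a) da, weight-at-age w(a) = 1 - exp(-a)."""
    return sum((1 - math.exp(-i * da)) * math.exp(-Z * i * da) * da
               for i in range(int(a_max / da)))

c = biomass_per_recruit(Z)
# Solving R = a_bh * c * R / (1 + b_bh * c * R) for R > 0 gives:
R_star = (a_bh * c - 1) / (b_bh * c)
print(c, R_star)   # positive steady-state recruitment exists since a_bh * c > 1
```

For this exponential survival and weight curve, c has the closed form 1/Z - 1/(Z+1), so the numerical integral can be checked analytically, illustrating the kind of transparency the continuous framework offers.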
This paper introduces a maximum likelihood estimator (MLE) based on an artificial neural network (ANN) for fast computation of the bearing that indicates the direction to the source of an electromagnetic wave received by a passive radar system equipped with an array antenna. The authors propose a cascade scheme for the ANN training phase, where the network is fed with the pair-wise delays of received stationary or cyclostationary signals and the output of the network goes to the input of the target function being maximized, together with the same data. The designed ANN topology has a modified output layer consisting of a custom neuron that implements the argument function of a complex number rather than the linear or sigmoid-like ones used in conventional multilayer perceptron topologies. The simulation carried out for a ring array antenna shows that a single estimation obtained via the ANN MLE takes 12 times less computational time compared to an MLE implemented via a numerical optimization technique. The degradation of accuracy, measured as the increase of the mean-squared error, does not exceed 10% of the potential value for a particular signal-to-noise ratio (SNR), and this difference shows no tendency to decrease at higher SNR. The estimation error appeared to be independent of the true value over a wide range of bearings.
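The non-standard output neuron described above can be sketched as follows; the two pre-activations and their interpretation are assumptions based on the abstract, not the paper's exact architecture.

```python
import math

# Sketch of the custom output neuron: instead of a linear or sigmoid-like
# activation, the last layer combines two pre-activations (re, im) and
# outputs the argument of the complex number re + j*im, which maps naturally
# onto a bearing angle without the saturation of a bounded activation.

def arg_neuron(re, im):
    """Output activation: angle of re + j*im in (-pi, pi]."""
    return math.atan2(im, re)

# The output is periodic by construction, so bearings near +/-180 degrees
# do not suffer from the saturation of sigmoid-like activations.
print(math.degrees(arg_neuron(0.0, 1.0)))    # 90.0
print(math.degrees(arg_neuron(-1.0, 0.0)))   # 180.0
```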
Mathematical modeling of stock market functioning is one of the topical and, at the same time, complex tasks of modern theoretical economics. From our point of view, building such mathematical models "ab initio", by using an analogy between the stock market and a certain physical system (in our work, a laser), is the most promising approach. This paper proposes a simple econophysical model of the stock market as an open nonequilibrium system in the form of the Lorenz-Haken equations. In this system, the variation of the ask price, the variation of the bid price, and the instantaneous difference between the numbers of agents in the active and passive states are the dynamic variables, while the intensity of the external information flow is a control parameter. This model explains the impossibility of an equilibrium state of the market and shows the presence of deterministic chaos in the stock market.
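A minimal sketch of the deterministic chaos claimed above, using the classical Lorenz equations with textbook parameters (sigma = 10, r = 28, b = 8/3) and simple Euler integration; the econophysical calibration of the paper is not reproduced.

```python
# Euler integration of the Lorenz system; in the econophysical reading,
# x ~ ask-price variation, y ~ bid-price variation, z ~ the difference
# between the numbers of active and passive agents, and r plays the role
# of the information-flow control parameter.

def lorenz_step(state, dt, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (r * x - y - x * z) * dt,
            z + (x * y - b * z) * dt)

def trajectory(state, steps, dt=0.001):
    for _ in range(steps):
        state = lorenz_step(state, dt)
    return state

a = trajectory((1.0, 1.0, 1.0), 20000)
b = trajectory((1.0 + 1e-6, 1.0, 1.0), 20000)
print(a)
print(b)   # a tiny initial difference produces macroscopically different states
```

Sensitive dependence on initial conditions, two trajectories starting 1e-6 apart diverging to macroscopically different states while remaining bounded on the attractor, is the fingerprint of deterministic chaos invoked in the abstract.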
In this article we aim to highlight the problems related to the structure and stability of the
comparatively thin current sheets that were relatively recently discovered by space missions in
the magnetospheres of the Earth and planets, as well as in the solar wind. These magnetoplasma
structures are universal in collisionless cosmic plasmas and can play a key role in the processes
of storage and release of energy in the space environment. The development of a self-consistent
theory for these sheets in the Earth’s magnetosphere, where they were first discovered, has a long
and dramatic history. A solution to the problem of the thin current sheet structure and stability
became possible in the framework of a kinetic quasi-adiabatic approach, which is required to explain their
embedding and metastability properties. It was found that the structure and stability of current
structures are completely determined by the nonlinear dynamics of plasma particles. Theoretical
models have been developed to predict many properties of these structures and interpret many
experimental observations in planetary magnetospheres and the heliosphere.
This article describes the problem of analyzing social network graphs and other sets of interacting objects. It also presents community detection algorithms for social networks, together with their classification and analysis. In addition, it considers the applicability of these algorithms to real tasks in social network graph analysis.
The primary purpose of this paper is to provide an overview of existing educational solutions for IoT and to develop proposals for their improvement. The study analyzes the current state of the educational IoT sphere and gives a comparative analysis of educational products used for teaching undergraduate students. The article then describes the architecture of our own software and hardware platform for learning IoT. Moreover, this paper reviews the methods and technical instruments employed to design software and hardware appliances.
This book contains a selection of papers accepted for presentation and discussion at the 2018 International Conference on Digital Science (DSIC'18). This Conference had the support of the Institute of Certified Specialists, Russia, AISTI (Iberian Association for Information Systems and Technologies), and Springer. It will take place at the Convention Centre, Budva, Montenegro, on October 19-21, 2018.
DSIC'18 is an international forum for researchers and practitioners to present and discuss the most recent innovations, trends, results, experiences, and concerns in the several perspectives of Digital Science. The main idea of this Conference is that the world of science is united, allowing all scientists and practitioners to think, analyze, and generalize their thoughts.
DSIC aims to efficiently disseminate original research results in the natural, social, art, and humanities sciences. An important characteristic feature of the Conference is its short publication time and worldwide distribution. The Conference enables fast dissemination: participants can publish their papers in print and electronic format, which is then made available worldwide and accessible to numerous researchers.
The Scientific Committee of DSIC'18 was composed of a multidisciplinary group of 26 experts. One hundred and seven invited reviewers who are intimately connected with Digital Science had the responsibility of evaluating, in a "double-blind review" process, the papers received for each of the main themes proposed for the Conference: Digital Art and Humanities; Digital Economics; Digital Education; Digital Engineering; Digital Environmental Sciences; Digital Finance, Business and Banking; Digital Media; Digital Medicine, Pharma and Public Health; Digital Public Administration; Digital Technology and Applied Sciences.
DSIC'18 received 88 contributions from 16 countries around the world. The papers accepted for presentation and discussion at the Conference are published by Springer (this book) and will be submitted for indexing by ISI, SCOPUS, among others.
The smart monitoring system (SMS) vision relies on the use of ICT to efficiently manage and maximize the utility of network infrastructures and services in order to improve the quality of service and network performance. Many SMS projects are dynamic data-driven application systems, in which data from sensors monitoring the system state are used to drive computations that can, in turn, dynamically adapt and improve the monitoring process as the complex system evolves. In this context, the research and development of a new paradigm, a Distributed Big Data Driven Framework (DBDF) for monitoring data in mobile network infrastructures, entails the ability to dynamically incorporate more accurate information for network monitoring and control purposes by obtaining real-time measurements from the base stations, user demands and claims, and other sensors (for weather conditions, etc.). The proposed framework consists of network probes, a data parsing application, message-oriented middleware, real-time and offline data models, Big Data storage and decision layers, and other data sources. Each Big Data layer might be implemented after a comparative analysis of the most effective Big Data solutions. In addition, as a proof of concept, a roaming user detection model was created as an Apache Spark application. The model filters streaming protocol data, converts it into JSON format, and finally sends it to a Kafka application. The experiments with the model demonstrated the capabilities of Apache Spark as the foundation of a Big Data hub serving as a basic application for online mobile network data processing.
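A plain-Python sketch of the roaming-detection stage described above. The actual implementation is an Apache Spark application feeding Kafka; the record layout, the HOME_MCC constant, and the send stub here are illustrative assumptions.

```python
import json

# Filter signalling records for roaming users and forward them as JSON,
# mimicking the filter -> serialize -> publish stages of the pipeline.

HOME_MCC = "283"   # hypothetical home-network mobile country code

def detect_roamers(records, send):
    """Forward records whose SIM is registered to a foreign network."""
    for rec in records:
        if rec.get("mcc") != HOME_MCC:   # SIM registered abroad -> roamer
            send(json.dumps(rec))

sent = []
stream = [{"imsi": "001", "mcc": "283"}, {"imsi": "002", "mcc": "262"}]
detect_roamers(stream, sent.append)
print(sent)   # only the roaming record is forwarded downstream
```

In the real pipeline, `send` would be a Kafka producer call and `records` a deserialized protocol stream; the stage boundaries, however, are the same.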
In this paper, we describe a deep-learning system for emotion detection in textual conversations that participated in SemEval-2019 Task 3 "EmoContext". We designed a specific bidirectional LSTM architecture that allows the model not only to learn semantic and sentiment feature representations, but also to capture user-specific conversation features. To fine-tune word embeddings using distant supervision, we additionally collected a significant amount of emotional texts. The system achieved a 72.59% micro-average F1 score for the emotion classes on the test dataset, thereby significantly outperforming the officially released baseline. The word embeddings and the source code were released for the research community.
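The evaluation metric quoted above, micro-averaged F1 over the emotion classes, can be sketched as follows; the toy labels are hypothetical, and the exclusion of the majority "others" class from the pooled counts follows the EmoContext scoring convention.

```python
# Pooled (micro-averaged) F1 over the emotion classes: true positives,
# false positives, and false negatives are summed across classes before
# computing precision and recall; the majority "others" class is excluded
# from the pooled counts.

def micro_f1(gold, pred, classes):
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g in classes)
    fp = sum(1 for g, p in zip(gold, pred) if p in classes and g != p)
    fn = sum(1 for g, p in zip(gold, pred) if g in classes and g != p)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold labels and system predictions:
gold = ["happy", "sad", "angry", "others", "happy"]
pred = ["happy", "angry", "angry", "sad", "others"]
print(micro_f1(gold, pred, {"happy", "sad", "angry"}))   # 0.5
```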
Intelligent computer systems aim to help humans make decisions. Many practical decision-making problems are classification problems by nature, but standard classification algorithms are often not applicable since they assume a balanced distribution of classes and constant misclassification costs. From this point of view, algorithms that consider the cost of decisions are essential since they are more consistent with the requirements of real life. These algorithms generate decisions that directly optimize parameters valuable for business, for example, cost savings. Despite the practical value of cost-sensitive algorithms, only a small number of works study this problem, concentrating mainly on the case when the cost of a classifier error is constant and does not depend on a specific example. However, many real-world classification tasks are example-dependent cost-sensitive (ECS), where the costs of misclassification vary between examples and not only between classes. Existing methods of ECS learning include only modifications of the simplest machine learning models (naive Bayes, logistic regression, decision trees). These models produce promising results, but there is a need for further improvement in performance, which can be achieved by using gradient-based ensemble methods. To bridge this gap, we present an ECS generalization of AdaBoost. We study three models that differ in the way the cost is introduced into the loss function: inside the exponent, outside the exponent, and both inside and outside the exponent. The results of experiments on three synthetic and two real datasets (bank marketing and insurance fraud) show that example-dependent cost-sensitive modifications of AdaBoost outperform other known models.
Empirical results also show that the critical factors influencing the choice of model are not only the distribution of features, which is typical for cost-insensitive and class-dependent cost-sensitive problems, but also the distribution of costs. Next, since the outputs of AdaBoost are not well-calibrated posterior probabilities, we examine three approaches to the calibration of classifier scores: Platt scaling, isotonic regression, and ROC modification. The results show that calibration not only significantly improves the performance of the specific ECS models but also unlocks the capabilities of the original AdaBoost. The obtained results provide new insight into the behavior of cost-sensitive models from a theoretical point of view and prove that the presented approach can significantly improve the practical design of intelligent systems.
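One round of the cost-inside-the-exponent weight update, one of the three variants mentioned above, can be sketched as follows; the toy labels, predictions, and per-example costs are illustrative.

```python
import math

# Example-dependent cost-sensitive weight update with the cost c_i placed
# inside the exponent: w_i <- w_i * exp(-alpha * c_i * y_i * h_i), so that
# misclassifying a costly example (y_i * h_i = -1) inflates its weight more
# strongly than a unit-cost mistake.

def ecs_update(weights, costs, y, h, alpha):
    new = [w * math.exp(-alpha * c * yi * hi)
           for w, c, yi, hi in zip(weights, costs, y, h)]
    z = sum(new)                      # renormalise to a distribution
    return [v / z for v in new]

y = [+1, +1, -1]                      # true labels
h = [+1, -1, -1]                      # weak-learner outputs (example 2 is wrong)
costs = [1.0, 5.0, 1.0]               # example 2 carries a high cost
w = ecs_update([1 / 3, 1 / 3, 1 / 3], costs, y, h, alpha=0.5)
print(w)   # the costly misclassified example now dominates the distribution
```

The next weak learner is trained against this reweighted distribution, so high-cost mistakes are corrected first; that is the intuition behind all three cost-placement variants.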
Distinguishing outliers from normal data in wireless sensor networks has been a big challenge in the anomaly detection domain, mostly due to the nature of the anomalies: software or hardware failures, reading errors, or malicious attacks, just to name a few. In this article, we introduce an optimum-path forest (OPF) classifier for anomaly detection in the aforementioned context. The results are compared against one-class support vector machines and a multivariate Gaussian distribution. Additionally, we propose to employ meta-heuristic optimization techniques to fine-tune the OPF classifier in the context of anomaly detection in wireless sensor networks.
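One of the baselines mentioned above, anomaly detection under a fitted Gaussian, can be sketched in one dimension; the readings and the 3-sigma threshold are illustrative, and the article's version uses the full multivariate covariance.

```python
import math

# Fit a Gaussian to normal sensor readings and flag values that fall far
# outside it; shown for a single sensor dimension to keep the sketch minimal.

def fit_gaussian(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def is_anomaly(x, mu, var, k=3.0):
    """Flag readings more than k standard deviations from the mean."""
    return abs(x - mu) > k * math.sqrt(var)

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]   # normal temperature data
mu, var = fit_gaussian(readings)
print(is_anomaly(20.1, mu, var), is_anomaly(35.0, mu, var))
```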
Previous works by these authors offer a numerical method of successive approximations for developing solutions of the problem of stabilization of nonlinear systems with a standard functional. This paper considers applying this method to the problem with singular control. This is achieved by introducing an auxiliary problem, whose solution provides a smooth approximation to the solution of the initial problem. The paper presents algorithms for constructing an approximate solution of the initial problem. It is demonstrated that, unlike direct algorithms of optimal control, these algorithms make it possible to register the saturation point, thus enabling one to register and study singular regimes.
Recently standardized millimeter-wave (mmWave) band 3GPP New Radio systems are expected to bring extraordinary rates to the air interface, efficiently providing commercial-grade enhanced mobile broadband services in hotspot areas. One of the challenges of such systems is efficient offloading of the data from access points (APs) to the network infrastructure. This task is of special importance for APs installed in remote areas with no transport network available. In this paper, we assess the packet-level performance of mmWave technology for cost-efficient backhauling of remote 3GPP NR AP connectivity "islands". Using a queuing system with arrival processes of the same priority competing for transmission resources, we assess the aggregated and per-AP packet loss probability as a function of environmental conditions, mmWave system specifics, and generated traffic volume. We show that autocorrelation in the aggregated traffic has a significant impact on the service characteristics of the mmWave backhaul and needs to be compensated by increasing either the emitted power or the number of antenna array elements. Autocorrelation in the per-AP traffic and background traffic from other APs also negatively affects the per-AP packet loss probability. However, this effect is of a different magnitude and heavily depends on the fraction of per-AP traffic in the aggregated traffic stream. The developed model can be used to parameterize mmWave backhaul links as a function of the propagation environment, system design, and traffic conditions.
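As a memoryless point of comparison for the autocorrelation effects discussed above, the sketch below computes the Erlang-B loss probability for Poisson traffic offered to c resource units; correlated arrivals push the real loss above this baseline. The load values are illustrative, and this is not the paper's queuing model.

```python
# Erlang-B loss probability for Poisson traffic offered to c resource units,
# computed with the standard stable recursion:
#   B(0, a) = 1;  B(m, a) = a * B(m-1, a) / (m + a * B(m-1, a)).

def erlang_b(offered_load, c):
    b = 1.0
    for m in range(1, c + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

a = 64.0                       # offered load in Erlangs (aggregated AP traffic)
for c in (64, 72, 80):
    print(c, erlang_b(a, c))   # loss drops as backhaul capacity grows
```

The gap between this memoryless loss and the loss measured under autocorrelated traffic is what the extra emitted power or antenna elements must compensate for.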
There is an ongoing evolution towards a new approach to large-scale optimisation based on co-evolutionary search using interacting heterogeneous agent-processes, implemented as synchronised genetic algorithms with local populations. The individualisation of heuristic operators at the level of the agent-processes that implement independent evolutionary searches improves the likelihood of obtaining the best solutions in the shortest time. Based on this property, a parallel multi-agent real-coded genetic algorithm for large-scale constrained black-box single-objective optimisation problems (LSOPs) is proposed. It facilitates the effective and frequent exchange of the best candidate solutions between interacting agent-processes with individual parameters, such as types of crossover and mutation operators with their own characteristics. We have improved both the quality of solutions and the time-efficiency of the multi-agent real-coded genetic algorithm (MA-RCGA). A novel framework was developed that aggregates MA-RCGA with simulation models implementing a set of objective functions for real-world large-scale optimisation problems, such as a simulation model of an ecological-economic system implemented in the AnyLogic tool.
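The evolutionary search that each agent-process runs can be sketched as a minimal single-population real-coded GA on the sphere function (blend crossover, Gaussian mutation, elitism); MA-RCGA itself additionally runs many such populations in parallel and periodically exchanges their best members. All parameters here are illustrative.

```python
import random

def sphere(x):
    """Classical separable test function; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rcga(dim=5, pop_size=30, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sphere)
        elite = pop[: pop_size // 2]          # elitist survivor selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            wgt = rng.random()                # blend (arithmetic) crossover
            child = [wgt * a + (1 - wgt) * b for a, b in zip(p1, p2)]
            if rng.random() < 0.2:            # Gaussian mutation of one gene
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = elite + children
    return min(sphere(ind) for ind in pop)

print(rcga())   # best fitness after 200 generations, close to the optimum 0
```

In the multi-agent setting, several such runs with different crossover and mutation settings would periodically swap their elite members, which is the exchange mechanism the abstract describes.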
This paper suggests an algorithm for stress testing the credit risk of a Russian commercial bank, intended for use by investors and bank customers to assess the bank's financial stability under stressful scenarios. The indicator of bank losses used in this work is the loan loss provision. The proposed algorithm describes the bank's cash flows in stressful situations, taking into account the demand function for the loans of the analyzed bank, the availability of the capital necessary to increase the loan portfolio, and the availability of a sufficient amount of liquid assets to cover losses.
Sustaining a competitive edge in today’s business world requires innovative approaches to product, service, and management systems design and performance. Advances in computing technologies have presented managers with additional challenges as well as further opportunities to enhance their business models.
Software Engineering for Enterprise System Agility: Emerging Research and Opportunities is a collection of innovative research that identifies the critical technological and management factors in ensuring the agility of business systems and investigates process improvement and optimization through software development. Featuring coverage on a broad range of topics such as business architecture, cloud computing, and agility patterns, this publication is ideally designed for business managers, business professionals, software developers, academicians, researchers, and upper-level students interested in current research on strategies for improving the flexibility and agility of businesses and their systems.