This work is devoted to the investigation of particle acceleration during magnetospheric dipolarizations. A numerical model is presented that takes into account four scenarios of plasma acceleration: (A) total dipolarization with characteristic time scales of about 3 min; (B) a single peak of the normal magnetic component Bz occurring on a time scale of less than 1 min; (C) a sequence of rapid jumps of Bz interpreted as the passage of a chain of multiple dipolarization fronts (DFs); and (D) mechanism (C) acting together with enhanced electric and magnetic fluctuations with a small characteristic time scale of about 1 s. Within the framework of the model, we have obtained and analyzed the energy spectra of four plasma populations: electrons e, protons H+, helium He+ ions, and oxygen O+ ions, accelerated by the above-mentioned processes (A)-(D). It is shown that O+ ions are accelerated mainly by mechanism (A), whereas H+ and He+ ions (and, to some extent, electrons) are accelerated more effectively by mechanism (C) than by the single dipolarization (B). It is found that the high-frequency electric and magnetic fluctuations accompanying multiple DFs (D) can strongly accelerate electrons while only weakly influencing the other plasma populations. The modeling results clearly demonstrate the spatially and temporally resonant character of the particle acceleration processes. The maximum particle energies are estimated as functions of the size of the magnetic acceleration region and the magnitude of the magnetic field. The shapes of the energy spectra are discussed.
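For orientation, the scaling behind such estimates can be sketched as follows (a standard back-of-the-envelope bound, not the paper's own derivation): the energy gain of a particle of charge q is limited by the electric potential drop across the acceleration region,

\[
W_{\max} \sim q\,E_y\,L \sim q\,v\,B_z\,L,
\]

where \(E_y \sim v B_z\) is the inductive electric field of a front moving at speed \(v\), \(B_z\) is the normal magnetic field, and \(L\) is the size of the acceleration region; larger regions and stronger fields therefore yield higher maximum energies.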
The paper deals with cyclostationarity as a natural extension of stationarity, the key property in designing widely used models of random processes. A comparative example of two processes, one wide-sense stationary and the other wide-sense cyclostationary, is given and reveals the limitations of the conventional stationary description based on one-dimensional autocorrelation functions. It is shown that two significantly different random processes can be characterized by exactly the same one-dimensional autocorrelation function, while their two-dimensional autocorrelation functions provide a view in which the difference between processes of the two above-mentioned classes becomes much clearer. A more concise representation, obtained by expanding the two-dimensional autocorrelation function into a Fourier series in which the cyclic frequency appears as the transform parameter, is also illustrated. Finally, a closed-form expression for the components of the cyclic autocorrelation function is given for a random process consisting of an infinite train of rectangular pulses with randomly varying amplitudes.
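To make the quantities concrete (these are the standard definitions in the sense of Gardner, not reproduced verbatim from the paper): for a wide-sense cyclostationary process \(x(t)\) with period \(T_0\), the two-dimensional autocorrelation is periodic in \(t\) and therefore expands into a Fourier series whose coefficients are the cyclic autocorrelation components,

\[
R_x(t,\tau) = \mathrm{E}\{x(t)\,x(t+\tau)\} = \sum_{k=-\infty}^{\infty} R_x^{\alpha_k}(\tau)\, e^{j 2\pi \alpha_k t}, \qquad \alpha_k = k/T_0,
\]

\[
R_x^{\alpha}(\tau) = \frac{1}{T_0} \int_{0}^{T_0} R_x(t,\tau)\, e^{-j 2\pi \alpha t}\, dt .
\]

A wide-sense stationary process is the special case in which only the \(\alpha = 0\) component is nonzero.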
Urban greenery such as trees can effectively reduce air pollution in a natural and eco-friendly way. However, how to spatially locate and arrange greenery in an optimal way remains a challenging task. We developed an agent-based model of air pollution dynamics to support the optimal allocation and configuration of tree clusters in a city. The Pareto-optimal solutions for greenery in the city were computed using the suggested heuristic optimisation algorithm, considering the complex absorptive-diffusive interactions between agent-trees (tree clusters) and air pollutants produced by agent-enterprises (factories) and agent-vehicles (car clusters) located in the city. We applied and tested the model with empirical data in Yerevan, Armenia, and successfully found the optimal strategy under the budget constraint: planting various types of trees around kindergartens and emission sources.
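The notion of Pareto optimality used here can be illustrated with a minimal sketch (the plan names and objective values below are hypothetical; the paper's own heuristic and agent-based model are not reproduced):

```python
# Minimal sketch: extracting Pareto-optimal greenery plans from simulated
# (cost, residual_pollution) outcomes. Both objectives are minimized.
# Illustrative only; the paper's own heuristic and ABM are not shown here.

def pareto_front(plans):
    """Return plans not dominated in (cost, pollution); lower is better in both."""
    front = []
    for p in plans:
        dominated = any(
            q["cost"] <= p["cost"] and q["pollution"] <= p["pollution"]
            and (q["cost"] < p["cost"] or q["pollution"] < p["pollution"])
            for q in plans
        )
        if not dominated:
            front.append(p)
    return front

plans = [  # hypothetical ABM outputs: tree-cluster layouts scored by the model
    {"name": "around kindergartens", "cost": 40, "pollution": 55},
    {"name": "around factories",     "cost": 60, "pollution": 45},
    {"name": "uniform grid",         "cost": 60, "pollution": 70},
    {"name": "mixed placement",      "cost": 80, "pollution": 40},
]
print([p["name"] for p in pareto_front(plans)])
# -> ['around kindergartens', 'around factories', 'mixed placement']
```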
Evolution on changing fitness landscapes (seascapes) is an important problem in evolutionary biology. We
consider the Moran model of finite population evolution with selection in a randomly changing, dynamic
environment. In the model, each individual has one of the two alleles, wild type or mutant. We calculate the
fixation probability by making a proper ansatz for the logarithm of fixation probabilities. This method has been
used previously to solve the analogous problem for the Wright-Fisher model. The fixation probability is related to
the solution of a third-order algebraic equation (for the logarithm of fixation probability). We consider the strong
interference of landscape fluctuations, sampling, and selection when the fixation process cannot be described by
the mean fitness. Such an effect appears if the mutant allele has a higher fitness in one landscape and a lower
fitness in another, compared with the wild type, and the product of effective population size and fitness is large.
We provide a generalization of the Kimura formula for the fixation probability that applies to these cases. When
the mutant allele has a fitness (dis-)advantage in both landscapes, the fixation probability is described by the mean fitness.
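For reference, the classical result being generalized (stated here in one standard haploid diffusion form, not quoted from the paper) is Kimura's fixation probability for a mutant with constant selection coefficient \(s\) and initial frequency \(p\) in a population of effective size \(N\):

\[
\pi(p) = \frac{1 - e^{-2Nsp}}{1 - e^{-2Ns}},
\]

which reduces to the neutral value \(\pi(p) = p\) as \(s \to 0\). In the fluctuating-landscape regime described above, no single effective \(s\) captures the dynamics, which is what motivates the generalization.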
An age-structured bioeconomic model, which is completely continuous in age and time, is developed in order to compare with traditional discrete models. Both types have advantages and disadvantages. The continuous framework complements discrete models as it allows for deeper and more transparent analytical study and leads to analytical results that would be difficult to achieve within a discrete framework. To make the model realistic, a nonlinear recruitment function is introduced and steady state solutions and constant-effort optimal fishing are studied analytically. In addition, the framework has been used for numerical analysis. Simulations are used to investigate how optimal harvesting patterns vary with parameter values.
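A minimal sketch of the kind of continuous age-time dynamics involved (the standard McKendrick-von Foerster form with harvesting; the paper's exact equations and recruitment function may differ):

\[
\frac{\partial n(a,t)}{\partial t} + \frac{\partial n(a,t)}{\partial a} = -\bigl(\mu(a) + q(a)\,E(t)\bigr)\, n(a,t), \qquad n(0,t) = R\bigl(B(t)\bigr),
\]

where \(n(a,t)\) is the population density at age \(a\), \(\mu(a)\) is natural mortality, \(q(a)E(t)\) is fishing mortality under effort \(E\), and recruitment \(R\) is a nonlinear function of the spawning biomass \(B(t) = \int w(a)\, n(a,t)\, da\), for example the Beverton-Holt form \(R(B) = \alpha B / (1 + \beta B)\).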
This paper introduces a maximum likelihood estimator (MLE) based on an artificial neural network (ANN) for fast computation of the bearing that indicates the direction to the source of an electromagnetic wave received by a passive radar system equipped with an array antenna. The authors propose a cascade scheme for the ANN training phase in which the network is fed with the pairwise delays of received stationary or cyclostationary signals, and the output of the network goes to the input of the target function being maximized together with the same data. The designed ANN topology has a modified output layer consisting of a custom neuron that implements the argument function of a complex number, rather than the linear or sigmoid-like neurons used in conventional multilayer perceptron topologies. The simulation carried out for a ring array antenna shows that a single estimation obtained via the ANN MLE takes 12 times less computational time compared to an MLE implemented via a numerical optimization technique. The degradation of accuracy, measured as the increase in mean-squared error, does not exceed 10% of the potential value for the given signal-to-noise ratio (SNR), and this difference shows no tendency to decrease at higher SNR. The estimation error proved to be independent of the true value over a wide range of bearings.
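As an illustration of the circular output idea (a minimal sketch with hypothetical layer sizes and names, not the paper's actual topology), the final layer can emit the real and imaginary parts of a phasor whose argument is the bearing:

```python
# Sketch of the "argument" output neuron idea: the last layer produces the
# real and imaginary parts of a complex number, and the bearing estimate is
# its argument via atan2. Layer sizes and names are hypothetical.
import torch
import torch.nn as nn

class BearingMLP(nn.Module):
    def __init__(self, n_delays: int, hidden: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_delays, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),        # -> (Re, Im) of a phasor
        )

    def forward(self, delays: torch.Tensor) -> torch.Tensor:
        re_im = self.body(delays)
        # Custom output "neuron": arg(Re + j*Im). This keeps the estimate on
        # a circle, so 359 deg and 1 deg are close, unlike a linear output.
        return torch.atan2(re_im[..., 1], re_im[..., 0])

model = BearingMLP(n_delays=6)        # e.g. pairwise delays of a 4-element array
bearing = model(torch.randn(1, 6))    # radians in (-pi, pi]
```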
Information technology (IT) is an indispensable tool for any organization today, so the choice of adequate IT solutions is a critically important skill. In the literature, many methods for selecting IT solutions have been proposed, but they often use vague criteria that are very difficult to quantify and complex methods to compare alternatives. As a result, the application of these methods outside theoretical articles is restricted, since practitioners need simpler approaches. We propose a simple method for the evaluation of alternative IT solutions based on five criteria, namely the cost of ownership, the time for the change, security risks, acceptance by users, and confidence in the supplier's ability to implement the solution. In accordance with the theory of probabilistic mental models, a reference class is proposed for each criterion, and variables that can be measured quantitatively are chosen on this basis. To simplify the decision-making process, a weighted product model is used for the comparison of alternatives.
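A weighted product model ranks alternatives by the product of criterion scores raised to the criterion weights. The sketch below illustrates this with hypothetical scores and weights (all criteria normalized so that higher is better; the paper's actual reference classes and measurements are not reproduced):

```python
# Sketch of the weighted product model (WPM) used to compare IT solutions.
# Criterion names follow the paper; the scores and weights are hypothetical.
# Cost, time and risk are entered as inverted, benefit-style scores in [0, 1].

criteria_weights = {"cost": 0.30, "time": 0.20, "risk": 0.20,
                    "acceptance": 0.15, "supplier_confidence": 0.15}

def wpm_score(scores: dict, weights: dict) -> float:
    """S(A) = prod_j x_j ** w_j, a dimensionless, scale-invariant score."""
    s = 1.0
    for criterion, w in weights.items():
        s *= scores[criterion] ** w
    return s

alternatives = {
    "solution_A": {"cost": 0.7, "time": 0.9, "risk": 0.6,
                   "acceptance": 0.8, "supplier_confidence": 0.9},
    "solution_B": {"cost": 0.9, "time": 0.6, "risk": 0.8,
                   "acceptance": 0.7, "supplier_confidence": 0.6},
}
for name, scores in alternatives.items():
    print(name, round(wpm_score(scores, criteria_weights), 3))
```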
Mathematical modeling of stock market functioning is one of the topical and at the same time complex tasks of modern theoretical economics. From our point of view, building such mathematical models “ab initio”, by using an analogy between the stock market and a certain physical system (in our work, a laser), is the most promising approach. This paper proposes a simple econophysical model of the stock market as an open nonequilibrium system in the form of the Lorenz–Haken equations. In this system, the variation of the ask price, the variation of the bid price, and the instantaneous difference between the numbers of agents in the active and passive states are the dynamic variables, while the intensity of the external information flow is a control parameter. This model explains the impossibility of an equilibrium state of the market and shows the presence of deterministic chaos in a stock market.
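For reference, the governing system has the familiar Lorenz form (written here in standard notation; the market variables map onto \(x, y, z\) as described above):

\[
\dot{x} = \sigma (y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z,
\]

where the pump parameter \(\rho\) plays the role of the intensity of the external information flow; above a threshold value the system exhibits deterministic chaos rather than settling into an equilibrium.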
In this article we aim to highlight the problems related to the structure and stability of the
comparatively thin current sheets that were relatively recently discovered by space missions in
the magnetospheres of the Earth and planets, as well as in the solar wind. These magnetoplasma
structures are universal in collisionless cosmic plasmas and can play a key role in the processes
of storage and release of energy in the space environment. The development of a self-consistent
theory for these sheets in the Earth’s magnetosphere, where they were first discovered, has a long
and dramatic history. The solution of the problem of thin current sheet structure and stability
became possible in the framework of a kinetic quasi-adiabatic approach, which is required to explain
their embedding and metastability properties. It was found that the structure and stability of current
structures are completely determined by the nonlinear dynamics of plasma particles. Theoretical
models have been developed to predict many properties of these structures and interpret many
experimental observations in planetary magnetospheres and the heliosphere.
This article describes the problem of analyzing graphs of social networks and other interacting objects. It also presents community detection algorithms for social networks, together with their classification and analysis. In addition, it considers the applicability of these algorithms to real tasks in social network graph analysis.
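As a concrete example of one widely used family of such algorithms (modularity optimization; this is an illustration, not an algorithm singled out by the article), community detection can be run in a few lines with networkx:

```python
# Sketch: modularity-based community detection on a social graph with
# networkx. Illustrates one of the algorithm families discussed; the
# article itself covers and compares several others.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                 # classic benchmark social network
communities = greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```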
The primary purpose of this paper is to provide an overview of existing educational solutions for IoT and to develop proposals for their improvement. The study presents an analysis of the current state of the educational IoT sphere and a comparative analysis of educational products used for teaching undergraduate students. The article then describes the architecture of our own software and hardware platform for learning IoT. Moreover, this paper reviews the methods and technical instruments employed to design software and hardware appliances.
This book contains a selection of papers accepted for presentation and discussion at the 2018 International Conference on Digital Science (DSIC’18). This Conference had the support of the Institute of Certified Specialists, Russia, AISTI (Iberian Association for Information Systems and Technologies), and Springer. It took place at the Convention Centre, Budva, Montenegro, on October 19-21, 2018.
DSIC’18 is an international forum for researchers and practitioners to present and discuss the most recent innovations, trends, results, experiences, and concerns in the several perspectives of Digital Science. The main idea of this Conference is that the world of science is united, allowing all scientists and practitioners to think, analyze, and generalize their ideas.
DSIC aims to efficiently disseminate original research results in the natural, social, art, and humanities sciences. An important characteristic of the Conference is its short publication time and worldwide distribution. The Conference enables fast dissemination: participants can publish their papers in print and electronic format, which is then made available worldwide and accessible to numerous researchers.
The Scientific Committee of DSIC’18 was composed of a multidisciplinary group of 26 experts. One hundred and seven invited reviewers, closely connected with Digital Science, were responsible for evaluating, in a “double-blind review” process, the papers received for each of the main themes proposed for the Conference: Digital Art and Humanities; Digital Economics; Digital Education; Digital Engineering; Digital Environmental Sciences; Digital Finance, Business and Banking; Digital Media; Digital Medicine, Pharma and Public Health; Digital Public Administration; Digital Technology and Applied Sciences.
DSIC’18 received 88 contributions from 16 countries around the world. The papers accepted for presentation and discussion at the Conference are published by Springer (this book) and will be submitted for indexing by ISI and SCOPUS, among others.
The smart monitoring system (SMS) vision relies on the use of ICT to efficiently manage and maximize the utility of network infrastructures and services in order to improve the quality of service and network performance. Many aspects of SMS projects are dynamic data-driven application systems, where data from sensors monitoring the system state are used to drive computations that, in turn, can dynamically adapt and improve the monitoring process as the complex system evolves. In this context, the research and development of a new paradigm, the Distributed Big Data Driven Framework (DBDF) for monitoring data in mobile network infrastructures, entails the ability to dynamically incorporate more accurate information for network monitoring and controlling purposes by obtaining real-time measurements from the base stations, user demands and claims, and other sensors (for weather conditions, etc.). The proposed framework consists of network probes, a data parsing application, message-oriented middleware, real-time and offline data models, Big Data storage and decision layers, and other data sources. Each Big Data layer can be implemented after a comparative analysis of the most effective Big Data solutions. In addition, as a proof of concept, a roaming-user detection model was created as an Apache Spark application. The model filters streaming protocol data, serializes it into JSON format, and finally sends it to a Kafka application. The experiments with the model demonstrated the capabilities of Apache Spark as the foundation of a Big Data hub, a basic application for online mobile network data processing.
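A minimal sketch of a Spark Structured Streaming stage of the kind described, with hypothetical topic names, broker address, and filtering condition (the paper's actual protocol parsing is omitted):

```python
# Sketch: read protocol records, filter roaming users, serialize to JSON
# and publish to Kafka. Topic names, hosts and the filter are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_json, struct

spark = SparkSession.builder.appName("roaming-detector").getOrCreate()

records = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "raw-protocol-data")
           .load())

# Parsing of the binary protocol payload is omitted here; we just cast the
# raw Kafka value to a string as a stand-in for the parsed record.
parsed = records.selectExpr("CAST(value AS STRING) AS value")

roaming_json = (parsed
                .filter(col("value").contains("roaming"))   # stand-in filter
                .select(to_json(struct("*")).alias("value")))

query = (roaming_json.writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("topic", "roaming-users")
         .option("checkpointLocation", "/tmp/roaming-checkpoint")
         .start())
```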
This paper describes and analyses optimization approaches that make possible the exact calculation of millions of hierarchical count-distinct measures over hundreds of billions of data rows. The described approach evolved over several years, in parallel with the growth of tasks at a fast-growing internet company, and was finally implemented as the PEAPM (Pipelined Exact Accumulation for Paralleled Measures) algorithm. The current version of the algorithm outputs exact values (not estimates), works in a single thread, completes in minutes on general commodity hardware, and requires a volume of RAM equal to twice the size of the required measures.
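The PEAPM internals are not reproduced here, but the task itself, exact hierarchical count distinct, can be illustrated with a toy single-pass accumulation (real inputs are orders of magnitude larger, which is exactly what PEAPM is designed to handle):

```python
# Toy illustration of "hierarchical count distinct" (the task, not the
# PEAPM algorithm): exact distinct-user counts for every node of a
# URL-path hierarchy, accumulated in one pass over the rows.
from collections import defaultdict

distinct = defaultdict(set)          # hierarchy node -> set of user ids

rows = [("ru/news", 1), ("ru/news", 2), ("ru/sport", 1), ("com/news", 3)]
for path, user in rows:
    node = ""
    for part in path.split("/"):     # credit every ancestor of the leaf
        node = f"{node}/{part}" if node else part
        distinct[node].add(user)

for node, users in sorted(distinct.items()):
    print(node, len(users))          # exact values, not estimates
```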
In this paper, we describe a deep-learning system for emotion detection in textual conversations that participated in SemEval-2019 Task 3 “EmoContext”. We designed a specific bidirectional LSTM architecture that allows the model not only to learn semantic and sentiment feature representations but also to capture user-specific conversation features. To fine-tune word embeddings using distant supervision, we additionally collected a significant amount of emotional texts. The system achieved a 72.59% micro-average F1 score for emotion classes on the test dataset, thereby significantly outperforming the officially released baseline. The word embeddings and the source code have been released to the research community.
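A minimal sketch of the general architecture family (a plain bidirectional LSTM classifier; the actual system adds user-specific conversation features and distant-supervision fine-tuning, and the hyperparameters below are placeholders):

```python
# Sketch of a bidirectional-LSTM emotion classifier of the general kind
# described; EmoContext distinguishes four classes (happy/sad/angry/others).
import torch
import torch.nn as nn

class BiLSTMEmotion(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 300,
                 hidden: int = 128, n_classes: int = 4):
        super().__init__()
        # In the real system the embedding would be initialized from
        # pretrained, distantly fine-tuned word vectors.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)   # 2x: forward + backward

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.emb(token_ids)                      # (batch, seq, emb)
        out, _ = self.lstm(x)                        # (batch, seq, 2*hidden)
        return self.fc(out[:, -1, :])                # logits from last step

model = BiLSTMEmotion(vocab_size=50_000)
logits = model(torch.randint(0, 50_000, (8, 40)))    # batch of 8 utterances
```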
The paper reviews and analyzes protocols and technologies for transferring and presenting IoT data, develops a model of a heterogeneous IoT network for hard-to-reach areas, and proposes a method to improve the efficiency of data transfer in such a network. As a result of the work, a model for using Internet of Things technology (LPWAN) in hard-to-reach areas was developed, information presentation methods were identified that solve the problem of collecting information from remote sensors located where traditional communication channels are absent, and the results obtained were verified in practice. The paper uses simulation modeling to study the applicability of different methods of presenting information when transmitting IoT data over low-speed satellite communication channels. The method proposed in the paper enabled the use of Internet of Things technology in remote areas via the SBD satellite short message service. The proposed method reduced the volume and number of SBD messages during data transmission over low-speed satellite communication channels, which made it possible to reduce the cost of data transmission by a factor of 4.82.
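As an illustration of the kind of compaction that reduces SBD message volume (a hypothetical record layout, not the paper's actual format), several readings can be packed into one fixed binary record instead of verbose text:

```python
# Illustrative sketch: packing sensor readings into a fixed binary record
# so that fewer and smaller SBD messages are needed. Layout is hypothetical.
import struct

# <: little-endian; I: unix timestamp; h: temperature*100; H: pressure hPa;
# B: battery percent. 9 bytes per reading vs roughly 60 bytes of JSON.
RECORD = struct.Struct("<IhHB")

def pack_reading(ts: int, temp_c: float, pressure_hpa: int, battery: int) -> bytes:
    return RECORD.pack(ts, int(temp_c * 100), pressure_hpa, battery)

payload = b"".join(
    pack_reading(*r) for r in [(1700000000, 21.37, 1013, 87),
                               (1700000600, 21.12, 1014, 86)]
)
print(len(payload), "bytes for 2 readings")   # fits easily in one SBD message
```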
Intelligent computer systems aim to help humans make decisions. Many practical decision-making problems are classification problems by nature, but standard classification algorithms are often not applicable since they assume a balanced distribution of classes and constant misclassification costs. From this point of view, algorithms that consider the cost of decisions are essential, since they are more consistent with the requirements of real life. These algorithms generate decisions that directly optimize parameters valuable for business, for example, cost savings. Despite the practical value of cost-sensitive algorithms, few works study this problem, and they concentrate mainly on the case when the cost of a classifier error is constant and does not depend on a specific example. However, many real-world classification tasks are example-dependent cost-sensitive (ECS), where the costs of misclassification vary between examples and not only between classes. Existing methods of ECS learning include only modifications of the simplest machine learning models (naive Bayes, logistic regression, decision trees). These models produce promising results, but there is a need for further improvement in performance, which can be achieved by using gradient-based ensemble methods. To bridge this gap, we present an ECS generalization of AdaBoost. We study three models that differ in the way the cost is introduced into the loss function: inside the exponent, outside the exponent, and both inside and outside the exponent. The results of experiments on three synthetic and two real datasets (bank marketing and insurance fraud) show that example-dependent cost-sensitive modifications of AdaBoost outperform the other known models. Empirical results also show that the critical factors influencing the choice of the model include not only the distribution of features, which is typical for cost-insensitive and class-dependent cost-sensitive problems, but also the distribution of costs. Next, since the outputs of AdaBoost are not well-calibrated posterior probabilities, we examine three approaches to the calibration of classifier scores: Platt scaling, isotonic regression, and ROC modification. The results show that calibration not only significantly improves the performance of the specific ECS models but also unlocks better capabilities of the original AdaBoost. The obtained results provide new insight into the behavior of cost-sensitive models from a theoretical point of view and prove that the presented approach can significantly improve the practical design of intelligent systems.
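A minimal sketch of the first of the three variants, cost inside the exponent (an illustration of the idea, not the paper's full implementation or its calibration step):

```python
# Example-dependent cost-sensitive AdaBoost sketch: the per-example cost
# enters the weight-update exponent, so expensive examples are re-weighted
# harder after a mistake. Illustrative, with decision stumps as weak learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ecs_adaboost_fit(X, y, costs, n_rounds=50):
    """y in {-1, +1}; costs[i] = misclassification cost of example i."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)) / np.sum(w), 1e-10, None)
        if err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # Cost inside the exponent: correct cheap examples shrink gently,
        # misclassified expensive examples grow sharply.
        w *= np.exp(-alpha * y * pred * costs)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def ecs_adaboost_predict(X, stumps, alphas):
    score = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(score)
```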
Maintaining and developing the effectiveness of process stakeholders has become a technologically demanding task involving ever-increasing costs. The belief that the upcoming digital transformation (DT) will be a panacea is misguided, since DT requires fundamental re-education and a restructuring of all process environments and human factors. Regardless of the business sector, DT is expected to accelerate as technology advances; new entrants and new forms of business partnership are changing all the rules of the current landscape.
In large-scale research, data are usually collected at many sites, have a huge volume, and are constantly being generated. Since it is often impossible to collect all the relevant data on a single computer, much attention is paid to algorithms that provide sequential or parallel accumulation of information and do not need to store all the original data. As an example of information accumulation, the Bayesian updating procedure for linear experiments is analyzed. The corresponding information spaces are defined and the relations between them are studied. It is shown that processing can be unified and simplified by introducing a special canonical form of information representation and transforming all the data and the original prior information into this form. Thanks to the rich algebraic properties of the canonical information space, the sequential Bayesian procedure allows various parallelization options that are ideally suited for distributed data processing platforms, such as Hadoop MapReduce. This opens up the possibility of a flexible and efficient scaling of information accumulation in distributed data processing systems.
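A minimal sketch of the additive information-form idea for a Gaussian linear model (the paper's canonical form and its algebraic structure may be more general): each site reduces its data to a summary, and merging summaries is associative and commutative addition, which is what makes MapReduce-style parallelization straightforward:

```python
# Sequential/parallel Bayesian updating for y = X @ theta + noise in
# "information form": a site summary is (Lambda, eta) = (X'X/s2, X'y/s2),
# and combining summaries is plain addition, in any order or grouping.
import numpy as np

def site_summary(X, y, noise_var=1.0):
    return X.T @ X / noise_var, X.T @ y / noise_var

def merge(a, b):                       # the "reduce" step
    return a[0] + b[0], a[1] + b[1]

def posterior(prior, *summaries):
    Lam, eta = prior
    for s in summaries:
        Lam, eta = merge((Lam, eta), s)
    return np.linalg.solve(Lam, eta), np.linalg.inv(Lam)   # mean, covariance

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
X = rng.normal(size=(100, 2)); y = X @ theta_true + rng.normal(size=100)
prior = (np.eye(2), np.zeros(2))       # standard normal prior in info form
s1 = site_summary(X[:50], y[:50]); s2 = site_summary(X[50:], y[50:])
mean, cov = posterior(prior, s1, s2)   # same result in any merge order
print(mean)                            # close to theta_true
```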
Distinguishing outliers from normal data in wireless sensor networks has been a major challenge in the anomaly detection domain, mostly due to the nature of the anomalies: software or hardware failures, reading errors, or malicious attacks, to name a few. In this article, we introduce an optimum-path forest (OPF) classifier for anomaly detection in the aforementioned context. The results are compared against one-class support vector machines and a multivariate Gaussian distribution. Additionally, we propose employing meta-heuristic optimization techniques to fine-tune the OPF classifier in the context of anomaly detection in wireless sensor networks.
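For concreteness, the one-class SVM baseline against which the results are compared can be set up as follows (scikit-learn; the sensor features below are synthetic stand-ins):

```python
# Sketch of the one-class SVM baseline used for comparison with the OPF
# approach; the data here are synthetic stand-ins for sensor readings.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(500, 3))      # e.g. temp/humidity/voltage
outliers = rng.normal(6.0, 1.0, size=(10, 3))     # injected faulty readings

detector = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)
pred = detector.predict(np.vstack([normal[:5], outliers[:5]]))
print(pred)                                       # +1 = normal, -1 = anomaly
```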