Phone/Fax: (495) 771-32-38
33 Kirpichnaya Ulitsa
School Head — Svetlana Maltseva
Deputy Head of Research and Partnerships — Vasily Kornilov
Deputy Head for Prospective Student and Alumni Affairs — Vladimir Samodurov
Deputy Head for Academics — Olga Tsukanova
Deputy Head for International Relations — Michael Komarov
Mathematical modeling of stock market functioning is one of the topical and, at the same time, complex problems of modern theoretical economics. From our point of view, building such mathematical models “ab initio”, by using an analogy between the stock market and a certain physical system (in our work, a laser), is the most promising approach. This paper proposes a simple econophysical model of the stock market as an open nonequilibrium system in the form of the Lorenz–Haken equations. In this system, the dynamical variables are the variation of the ask price, the variation of the bid price, and the instantaneous difference between the numbers of agents in the active and passive states, while the intensity of the external information flow serves as the control parameter. This model explains the impossibility of an equilibrium state of the market and shows the presence of deterministic chaos in a stock market.
A powerful and effective supply chain management system supports the smooth running of a business and the delivery of quality products and services, thereby increasing client trust and business gains. Reaching this stage in an organization's growth depends on making the right decisions to move in the correct direction. These decisions are normally derived from past events and external data that have been integrated to provide information or knowledge. For this purpose, a decision support system is proposed that pools data from the three main decision phases of supply chain management: (1) the supply chain design (strategy) phase, (2) the supply chain planning (management) phase, and (3) the operational phase. The proposed framework aims at boosting the decision-making process by providing unified information to support organizational decision making.
A fridge plays an important role in the kitchen compared to other appliances because it stores food products under optimal conditions for a long period of time. Ordinary refrigerators preserve meals perfectly well but are not effective at food management. Providing a remote control for home appliances extends the everyday usage of these devices. In addition to remote control, some manufacturers add modules such as internal cameras and hands-free speakers for convenient control of an appliance. All these devices are able to communicate with each other to reach common goals. The home appliance producer Liebherr, in cooperation with the technology company Microsoft, developed a solution for remote control of refrigerators with the possibility of food recognition using machine learning algorithms. This option enables automatic compilation of the list of food stored in the fridge and food ordering in an online shop without manual actions. It provides not only convenient usage of an appliance but also a reduction in electricity consumption, because users who know the list of food in the refrigerator do not open the fridge doors as frequently. In this paper we describe the SmartDevice technology from Liebherr that was developed for adding smart features to the brand's products. In particular, we review the main business processes of SmartDevice, discuss the advantages and disadvantages of this solution for end customers, and identify future research directions for creating smart fridges.
A modern enterprise has to react to permanent changes in the business environment by transformation of its own behavior, operational practices and business processes. Such transformations may range from changes of business processes to changes of information systems used to support the business processes, changes in the underlying IT infrastructures and even in the enterprise information system as a whole. The main characteristic of changes in a turbulent business environment and, consequently, in the enterprise information system is unpredictability. Therefore, an enterprise information system should support the operational efficiency of the current business model, as well as provide the necessary level of agility to implement future unpredictable changes of requirements.
This article aims to propose a conceptual model of an agile enterprise information system, which is defined as a working system that should eliminate the largest possible number of gaps caused by external events through incremental changes of its own components. A conceptual model developed according to the socio-technical approach includes the structural properties of an agile enterprise information system (actors, tasks, technology, and structure). The structural properties define its operational characteristics, i.e., measurable indicators of agility: the time, cost, scope, and robustness of the change process. Different ways to build such an agile system are discussed on the basis of axiomatic design theory. We propose an approach to measuring the time, cost, scope, and robustness of changes that enables a quantitative estimation of the achieved level of agility.
We consider interior and exterior initial boundary value problems for the three-dimensional wave (d'Alembert) equation. First, we reduce a given problem to an equivalent operator equation with respect to unknown sources defined only at the boundary of the original domain. In doing so, the Huygens' principle enables us to obtain the operator equation in a form that involves only a finite and non-increasing pre-history of the solution in time. Next, we discretize the resulting boundary equation and solve it efficiently by the method of difference potentials (MDP). The overall numerical algorithm handles boundaries of general shape using regular structured grids with no deterioration of accuracy. For long simulation times it offers sub-linear complexity with respect to the grid dimension, i.e., it is asymptotically cheaper than the cost of a typical explicit scheme. In addition, our algorithm allows one to share the computational cost between multiple similar problems. On multi-processor (multi-core) platforms, it benefits from what can be considered an effective parallelization in time.
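The finite-history structure exploited by the method can be summarized schematically as follows (the symbols $\mathcal{B}$, $K$, $T_0$, and $f$ are notational assumptions for this sketch, not the paper's exact operators):

```latex
% Governing 3D wave (d'Alembert) equation in the computational domain \Omega:
\frac{1}{c^{2}}\,\frac{\partial^{2} u}{\partial t^{2}} - \Delta u = 0 ,
\qquad (\mathbf{x},t) \in \Omega \times (0,T] .
% Schematic form of the equivalent boundary equation for unknown sources
% \phi defined only on \partial\Omega; by Huygens' principle in 3D, the
% operator involves \phi over a finite, non-increasing time window only:
(\mathcal{B}\phi)(\mathbf{x},t)
  = \int_{t-T_{0}}^{t} \int_{\partial\Omega}
    K(\mathbf{x},\mathbf{y},t-\tau)\,\phi(\mathbf{y},\tau)\,
    dS_{\mathbf{y}}\,d\tau
  = f(\mathbf{x},t),
\qquad \mathbf{x} \in \partial\Omega .
```

The fixed window $[t-T_{0},\,t]$ is what keeps the pre-history finite and makes the sub-linear long-time complexity possible.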
Researchers face fundamental challenges in applying the stochastic geometry framework to the analysis of terahertz (THz) communications systems. The two major problems are the principally new propagation model, which now includes an exponential term responsible for molecular absorption, and the blocking of THz radiation by the human crowd around the receiver. These phenomena change the probability density function (pdf) of the interference from a single node such that it no longer has an analytical Laplace transform (LT), preventing characterization of the aggregated interference and signal-to-interference ratio (SIR) distributions. The expected use of highly directional antennas at both the transmitter and the receiver adds to this problem, increasing the complexity of modeling efforts. In this paper, we consider a Poisson deployment of interferers in ℜ² and provide accurate analytical approximations for the pdf of the interference from a randomly chosen node for the blocking and non-blocking cases. We then derive the LTs of the pdfs of the aggregated interference and SIR. Using Talbot's algorithm for the inverse transform, we provide numerical results indicating that failure to capture atmospheric absorption, blocking, or antenna directivity leads to significant modeling errors. Finally, we investigate the response of the SIR densities to a wide range of system parameters, highlighting the specific effects of THz communications systems. The model developed in this paper can be used as a building block for performance analysis of realistic THz network deployments, providing metrics such as outage and coverage probabilities.
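The exponential absorption term that distinguishes the THz propagation model from conventional path loss can be illustrated with a simple link-budget sketch. All numeric values here (carrier frequency, gains, absorption coefficient) are assumptions for illustration, not the paper's parameters.

```python
import math

C = 3.0e8  # speed of light, m/s

def rx_power_w(pt_w, gt, gr, f_hz, d_m, k_abs):
    """Friis free-space spreading loss multiplied by the molecular
    absorption factor exp(-k_abs * d) specific to THz bands."""
    spreading = (C / (4.0 * math.pi * f_hz * d_m)) ** 2
    return pt_w * gt * gr * spreading * math.exp(-k_abs * d_m)

f = 300e9      # 300 GHz carrier (assumed)
k = 0.0033     # molecular absorption coefficient, 1/m (assumed value)

# Received power over a 10 m link, with and without the absorption term.
p10 = rx_power_w(1.0, 100.0, 100.0, f, 10.0, k)
p10_no_abs = rx_power_w(1.0, 100.0, 100.0, f, 10.0, 0.0)
print(p10, p10_no_abs)
```

The ratio of the two results equals exp(k·d), which is exactly the extra distance-dependent factor that breaks the analytical tractability of the single-node interference pdf.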
We use simultaneous Cluster and THEMIS observations to study the spatial distributions of a shear BY field in the Plasma Sheet (PS) of the Earth's magnetotail at −31 RE < X < −9 RE. The best correlation between the BY field in the PS (BY_PS) and the Y-component of the Interplanetary Magnetic Field (IMF) (BY_IMF) was observed during quiet PS periods when high-speed plasma flows were not detected. During active PS periods the correlation between BY_PS and BY_IMF was poor. The analysis of the spatial distribution of the BY field along the direction perpendicular to the Current Sheet (CS) plane showed the presence of one of the following configurations, which can be self-consistently generated in the CS: 1) the “quadrupole” distribution of the BY field, usually associated with the Hall current system in the vicinity of the X-line, and 2) the symmetrical “bell-shaped” distribution formed due to BY amplification near the neutral plane of the CS. Multipoint observations revealed the transient appearance of the “quadrupole” BY distribution during the periods of X-line formation in the mid-tail. This distribution was observed for a few minutes within at least 12 RE from the estimated X-line position. On the contrary, the symmetrical “bell-shaped” distribution is more localized in the radial direction and generally has a longer observation time (up to ~10 min). Thus, the internal CS perturbations, caused either by the Hall currents related to reconnection or by the peculiarities of the local quasi-adiabatic ion dynamics, significantly affect the shear BY field existing in the magnetotail due to partial IMF penetration.
In the 1960s, the so-called “software crisis” triggered the advent of software engineering as a discipline. The idea was to apply the engineering methods of material production to the new domain of large-scale concurrent software systems in order to make software projects more accurate and predictable. This software engineering approach proved feasible, though the methods and practices used had to differ substantially from those of material production. The focus of the software engineering discipline became the “serial” production of substantially large-scale, complex, and high-quality software systems. Researchers still argue whether the crisis in software engineering is over. The software crisis originates from a number of factors, both human-related and technology-related. To manage this crisis, the authors suggest a set of software engineering methods that systematically optimize the lifecycle for both types of influencing factors. This lifecycle optimization strategy includes crisis-responsive methodologies, system-level architectural patterns, informing process frameworks, and a set of knowledge transfer principles. Software development usually involves customers, developers, and their management; each of these parties has different preferences and expectations. These parties often differ in their vision of the resulting product; typically, the customers focus on business value while the developers are concerned with technological aspects. Such a difference in focus often results in crises. Thus, software crises often have a human-factor-related root cause. To deal with these kinds of crises, software engineers should enhance their skillset with managerial skills such as teamwork, communication, negotiation, and risk management.
Licensed assisted access (LAA) enables the coexistence of long-term evolution (LTE) and Wi-Fi in unlicensed bands, while potentially offering improved coverage and data rates. However, cooperation with the conventional random-access protocols that employ listen-before-talk (LBT) considerations makes meeting the LTE performance requirements difficult, since delay and throughput guarantees should be delivered. In this paper, we propose a novel channel sharing mechanism for the LAA system that is capable of simultaneously providing fairness of resource allocation across the competing LTE and Wi-Fi sessions as well as satisfying the quality-of-service guarantees of the LTE sessions in terms of their upper delay bound and throughput. Our proposal is based on two key mechanisms: 1) LAA connection admission control for the LTE sessions and 2) adaptive duty cycle resource division. The only external information necessary for the intended operation is the current number of active Wi-Fi sessions, inferred by monitoring the shared channel. In the proposed scheme, the LAA-enabled LTE base station fully controls the shared environment by dynamically adjusting the time allocations for both Wi-Fi and LTE technologies, while only admitting those LTE connections that would not interfere with Wi-Fi more than another Wi-Fi access point operating on the same channel would. To characterize the key performance trade-offs pertaining to the proposed operation, we develop a new analytical model. We then comprehensively investigate the performance of the developed channel sharing mechanism, confirming that it allows one to achieve a high degree of fairness between the LTE and Wi-Fi connections as well as provides guarantees in terms of the upper delay bound and throughput for the admitted LTE sessions. We also demonstrate that our scheme outperforms a typical LBT-based LAA implementation.
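The interplay of the two mechanisms can be sketched as follows. This is a deliberately simplified toy model, not the paper's algorithm: it assumes the LAA cell grants itself exactly the airtime share one additional Wi-Fi node would obtain, and admits a new LTE session only while the resulting airtime budget covers all per-session demands.

```python
def lte_airtime_share(n_wifi: int) -> float:
    """Airtime fraction taken by the LAA cell: that of one extra
    Wi-Fi node competing on the same channel (fairness assumption)."""
    return 1.0 / (n_wifi + 1)

def admit_lte_session(n_wifi, lte_demands_s, new_demand_s, cycle_s=0.1):
    """Admission control: accept the new LTE session only if the LAA
    airtime budget per duty cycle covers all session demands."""
    budget = lte_airtime_share(n_wifi) * cycle_s
    return sum(lte_demands_s) + new_demand_s <= budget

# 4 Wi-Fi sessions observed on the channel -> LAA takes 1/5 of each cycle.
print(lte_airtime_share(4))                          # 0.2
print(admit_lte_session(4, [0.005, 0.005], 0.005))   # True: 0.015 <= 0.02
print(admit_lte_session(4, [0.010, 0.008], 0.005))   # False: 0.023 > 0.02
```

The duty-cycle budget shrinks automatically as more Wi-Fi sessions appear, which is the sense in which the scheme never takes more than a single extra Wi-Fi node would.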
The importance of the problem under investigation lies in finding an effective way to manage the faults that occur when a project lacks sufficient control during implementation. Insufficient control usually leads to delays and, as a consequence, to very poor quality. The purpose of the article is to provide the project with the necessary level of control by placing control points within it. The article suggests methods for determining where, and whether, inspections should be conducted during the construction period of the project. The materials of the article can be used by project managers for more efficient and higher-quality management, and for faster completion at the lowest possible cost with the highest possible quality.
Companies are increasingly paying close attention to their IP portfolios, which are a key competitive advantage, so patents and patent applications, as well as the analysis and identification of future trends, are becoming important strategic components of a business strategy. We argue that the problems of identifying and predicting trends or entities, as well as the search for technical features, can be solved with the help of easily accessible Big Data technologies, machine learning, and predictive analytics, thereby offering an effective plan for development and progress. The purpose of this study is twofold: first, the identification of technological trends; second, the identification of the application areas that are most promising in terms of technology development and investment. The research was based on methods of clustering, processing of large text files, and search queries in patent databases. The suggested approach is demonstrated on experimental data in the field of moving connected UAVs and passive acoustic ecology control.
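A minimal sketch of the clustering step described above might look as follows. The document snippets are invented stand-ins for patent abstracts, and the TF-IDF + k-means pipeline is one common, easily accessible choice, not necessarily the exact method used in the study.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy patent-abstract corpus (invented snippets for illustration only);
# real input would be titles and abstracts pulled from patent databases.
docs = [
    "unmanned aerial vehicle rotor flight control and navigation",
    "drone swarm flight path planning with rotor control",
    "passive acoustic sensor array for environmental noise monitoring",
    "acoustic ecology monitoring using passive noise sensors",
]

# TF-IDF turns each document into a term-weight vector; k-means then
# groups similar vectors, surfacing candidate technology areas as clusters.
vectors = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)
```

On this tiny corpus the two UAV-related snippets fall into one cluster and the two acoustic-monitoring snippets into the other; at scale, cluster growth over filing dates is what signals an emerging trend.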
We present our observations of electromagnetic transients associated with GW170817/GRB 170817A using optical telescopes of Chilescope observatory and Big Scanning Antenna (BSA) of Pushchino Radio Astronomy Observatory at 110 MHz. The Chilescope observatory detected an optical transient of ∼19m on the third day in the outskirts of the galaxy NGC 4993; we continued observations following its rapid decrease. We put an upper limit of 1.5 × 10⁴ Jy on any radio source with a duration of 10–60 s, which may be associated with GW170817/GRB 170817A. The prompt gamma-ray emission consists of two distinctive components—a hard short pulse delayed by ∼2 s with respect to the LIGO signal and softer thermal pulse with T ∼ 10 keV lasting for another ∼2 s. The appearance of a thermal component at the end of the burst is unusual for short GRBs. Both the hard and the soft components do not satisfy the Amati relation, making GRB 170817A distinctively different from other short GRBs. Based on gamma-ray and optical observations, we develop a model for the prompt high-energy emission associated with GRB 170817A. The merger of two neutron stars creates an accretion torus of ∼10⁻² M⊙, which supplies the black hole with magnetic flux and confines the Blandford–Znajek-powered jet. We associate the hard prompt spike with the quasispherical breakout of the jet from the disk wind. As the jet plows through the wind with subrelativistic velocity, it creates a radiation-dominated shock that heats the wind material to tens of kiloelectron volts, producing the soft thermal component.
This book discusses smart, agile software development methods and their applications for enterprise crisis management, presenting a systematic approach that promotes agility and crisis management in software engineering. The key finding is that these crises are caused by both technology-based and human-related factors. Being mission-critical, human-related issues are often neglected. To manage the crises, the book suggests an efficient agile methodology including a set of models, methods, patterns, practices and tools. Together, these make a survival toolkit for large-scale software development in crises. Further, the book analyses lifecycles and methodologies focusing on their impact on the project timeline and budget, and incorporates a set of industry-based patterns, practices and case studies, combining academic concepts and practices of software engineering.
The spatial distributions of the magnetic field, plasma density, and current at distances of (20−400)RS from the Sun (where RS is the solar radius) are investigated within a stationary axisymmetric MHD model of the solar wind (SW) at all latitudes in the inertial frame of reference with the origin at the center of the Sun. The model takes into account the differential (with respect to the heliolatitude) rotation of the Sun and full corotation of plasma inside a boundary sphere of radius 20RS, which breaks down beyond this sphere. Self-consistent distributions of the plasma density, current, and magnetic field in the SW are obtained by numerically solving a set of time-independent MHD equations in spherical coordinates. It is demonstrated that the calculated results do not contradict observational data and describe a gradual transition from the fast SW at high heliolatitudes to the slow SW at low heliolatitudes, as well as the steepening of the profiles of the main SW characteristics with increasing distance from the Sun. The obtained dependences extend the understanding of the SW structure at low and high latitudes and agree with the well-known Parker model in the limit of a small Ampère force.
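For reference, the small-Ampère-force limit mentioned above corresponds to the classical Parker solution; in standard notation (a schematic reminder with assumed symbols, not the paper's derivation):

```latex
% Parker spiral for a radially expanding wind corotating out to r_0:
B_{r}(r) = B_{0}\left(\frac{r_{0}}{r}\right)^{2},
\qquad
\frac{B_{\varphi}}{B_{r}}
  = -\,\frac{\Omega\,(r - r_{0})\sin\theta}{v_{\mathrm{sw}}} ,
% where \Omega is the solar angular velocity, v_{sw} the radial wind
% speed, \theta the heliographic colatitude, and r_0 the radius of the
% corotation boundary.
```

The r⁻² radial field and the azimuthal winding growing linearly with distance are the limiting behavior the numerical solution should reproduce when the Ampère force is small.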
In this paper, the impact of lethal mutations on the evolutionary dynamics of asexual populations is analyzed. We suggest distinguishing different definitions of lethality, which lead to different mathematical formalizations of the microscopic model. Most studies focus on polyphasic lethality, meaning that individuals carrying lethal mutations have no offspring but consume common resources. In an alternative problem setting, monophasic lethal mutants die without giving offspring at the first stage of development. In the third case, semi-lethal mutations are considered, where the lethal mutants survive with some probability. We suggest and investigate mathematical models for these cases, deriving the evolutionary characteristics of the steady state. We found that the peak sequence probability depends drastically on which definition of lethality is adopted. The results obtained here can be used to address the error threshold paradox at the origin of life.
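The difference between the polyphasic and monophasic definitions can be illustrated with a toy quasispecies iteration. The fitness, copy fidelity, and lethal fraction below are assumed illustrative values, and back-mutation is ignored; this is a sketch of the modeling distinction, not the paper's model.

```python
# Toy quasispecies iteration comparing two definitions of lethality.
W_MASTER = 2.0   # replicative fitness of the master sequence (assumed)
Q = 0.8          # probability of an error-free copy (assumed)
L = 0.5          # fraction of erroneous copies that are lethal (assumed)

def steady_master_freq(monophasic: bool, n_gen: int = 200) -> float:
    """Iterate population fractions to steady state.

    Polyphasic: lethal offspring enter the population (and dilute it)
    but leave no offspring. Monophasic: lethal offspring die at once
    and never count toward the population."""
    m, v, dead = 1.0, 0.0, 0.0   # master, viable mutant, lethal fractions
    for _ in range(n_gen):
        new_m = W_MASTER * m * Q
        new_v = W_MASTER * m * (1 - Q) * (1 - L) + v
        new_dead = 0.0 if monophasic else W_MASTER * m * (1 - Q) * L
        total = new_m + new_v + new_dead
        m, v, dead = new_m / total, new_v / total, new_dead / total
    return m

print(steady_master_freq(monophasic=True))    # ~0.75
print(steady_master_freq(monophasic=False))   # ~0.69
```

Even in this minimal setting the steady-state master frequency differs between the two definitions, which is the qualitative effect the abstract reports for the peak sequence probability.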
Astronomical observations generate vast amounts of data. The BSA (Big Scanning Antenna) of LPI, used in the study of impulse phenomena, logs 87.5 GB of data daily (32 TB per year). These data have important implications for both short- and long-term monitoring of various classes of radio sources (including radio transients of different natures), monitoring of the Earth's ionosphere and of the interplanetary and interstellar plasma, and the search for and monitoring of different classes of radio sources. In the course of these studies, 83,096 individual pulse events were discovered (in the study interval of July 2012 – October 2013), which may correspond to pulsars, scintillating sources, and fast radio transients. The detected impulse events are intended to be used to filter subsequent observations. The study suggests an approach based on a multilayer artificial neural network that processes the raw input data; after processing by the hidden layers, the output layer produces the class of the impulsive phenomenon.
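The raw-features-to-class scheme can be sketched with a small multilayer perceptron. The two features and their class-dependent distributions below are invented stand-ins; real BSA processing would extract many more features from the raw records.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in features for detected pulse events (assumed, for
# illustration): [pulse width (ms), peak signal-to-noise ratio].
n = 60
pulsar_like = rng.normal(loc=[5.0, 12.0], scale=0.8, size=(n, 2))
rfi_like = rng.normal(loc=[40.0, 4.0], scale=3.0, size=(n, 2))
X = np.vstack([pulsar_like, rfi_like])
y = np.array([0] * n + [1] * n)   # 0 = pulsar-like, 1 = interference-like

# One hidden layer of 8 neurons: input -> hidden -> class label, mirroring
# the multilayer scheme described in the abstract. lbfgs suits small data.
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

In production, the trained network would label each of the tens of thousands of candidate pulse events, so that only the interesting classes are kept for follow-up observations.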
Magnetic field dipolarizations are often observed in the magnetotail during substorms. They generally include three temporal scales: (1) the actual dipolarization, when the normal magnetic field changes over several minutes from the minimum to the maximum level; (2) sharp Bz bursts (pulses), interpreted as the passage of multiple dipolarization fronts with characteristic time scales < 1 min; and (3) bursts of electric and magnetic fluctuations with frequencies up to the electron gyrofrequency, occurring at the smallest time scales (≤ 1 s). We present a numerical model in which the contributions of the above processes (1)-(3) to particle acceleration are analyzed. It is shown that these processes have a resonant character at different temporal scales. While O+ ions are more likely accelerated by mechanism (1), H+ ions (and to some extent electrons) are effectively accelerated by the second mechanism. High-frequency electric and magnetic fluctuations accompanying magnetic dipolarization, as in (3), are also found to efficiently accelerate electrons.
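The species-by-scale resonance picture follows from the very different gyration periods of the three populations. A quick check (the 20 nT field value is an assumed representative magnetotail magnitude, not from the paper):

```python
import math

Q_E = 1.602e-19   # elementary charge, C
M_E = 9.109e-31   # electron mass, kg
M_P = 1.673e-27   # proton (H+) mass, kg
M_O = 16 * M_P    # O+ mass (approximate), kg
B = 20e-9         # 20 nT: assumed representative magnetotail field

def gyro_period_s(mass_kg: float, charge_c: float = Q_E,
                  b_t: float = B) -> float:
    """Gyration period T = 2*pi*m / (q*B)."""
    return 2.0 * math.pi * mass_kg / (charge_c * b_t)

print(gyro_period_s(M_O))   # O+: ~50 s, close to the minutes-scale (1)
print(gyro_period_s(M_P))   # H+: a few seconds, matching sub-minute (2)
print(gyro_period_s(M_E))   # electrons: ~ms, matching fluctuations (3)
```

Each species gyrates on roughly the time scale of the mechanism that accelerates it most effectively, which is the resonant character noted above.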
System design where cyber-physical applications are securely coordinated from the cloud may simplify the development process. However, all private data are then pushed to these remote “swamps,” and human users lose actual control as compared to when the applications are executed directly on their devices. At the same time, computing at the network edge is still lacking support for such straightforward multidevice development, which is essential for a wide range of dynamic cyber-physical services. This article proposes a novel programming model as well as contributes the associated secure-connectivity framework for leveraging safe coordinated device proximity as an additional degree of freedom between the remote cloud and the safety-critical network edge, especially under uncertain environment constraints. This article is part of a special issue on Software Safety and Security Risk Mitigation in Cyber-physical Systems.
Sustaining a competitive edge in today’s business world requires innovative approaches to product, service, and management systems design and performance. Advances in computing technologies have presented managers with additional challenges as well as further opportunities to enhance their business models.
Software Engineering for Enterprise System Agility: Emerging Research and Opportunities is a collection of innovative research that identifies the critical technological and management factors in ensuring the agility of business systems and investigates process improvement and optimization through software development. Featuring coverage on a broad range of topics such as business architecture, cloud computing, and agility patterns, this publication is ideally designed for business managers, business professionals, software developers, academicians, researchers, and upper-level students interested in current research on strategies for improving the flexibility and agility of businesses and their systems.