This concise book provides a survival toolkit for efficient, large-scale software development. Discussing a multi-contextual research framework that aims to harness human-related factors to improve flexibility, it includes a carefully selected blend of models, methods, practices, and case studies. To investigate mission-critical communication aspects of systems engineering, it also examines diverse (i.e., cross-cultural and multinational) environments.
This book helps students better organize their knowledge bases, and presents conceptual frameworks, handy practices, and case-based examples of agile development in diverse environments. Together with the authors’ previous books, "Crisis Management for Software Development and Knowledge Transfer" (2016) and "Managing Software Crisis: A Smart Way to Enterprise Agility" (2018), it constitutes a comprehensive reference resource on the subject.
This paper summarizes practices of customer-driven services applied in a leading Russian bank to mitigate the impact of financial sanctions (2014–2019). We show how economic sanctions and strict national policies prompted the bank to increase flexibility in customer care in order to attract more capital from its existing clients. The project comprised three stages: (1) analysing requirements and documenting the “as-is” state of the processes; (2) analysing best practices and improving the processes with respect to flexibility and customer orientation; (3) implementing the new vision in the “to-be” state and performing final verification. At the third stage, to assess the results of process improvement in the bank within a year, we applied a set of methods based on data envelopment analysis, which provides a multidimensional understanding of processes and new views of customers’ value profiles. We found that process reengineering can contribute measurable results as early as the first month of implementation, and we argue that the findings can be used to introduce flexible data-driven customer care and improve customer-related processes in organisations worldwide.
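The abstract names data envelopment analysis as the assessment method but does not give its formulation. As a minimal sketch, the standard input-oriented CCR model scores one decision-making unit (here, a hypothetical bank branch; the data and variable names are illustrative, not from the paper) against its peers via a linear program:

```python
# Toy input-oriented CCR DEA model: efficiency of one unit (DMU k) relative
# to peers. All numbers are illustrative assumptions, not the paper's data.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs; columns = inputs (staff, operating cost) and outputs (deposits)
X = np.array([[20.0, 5.0], [30.0, 8.0], [25.0, 6.0]])   # inputs
Y = np.array([[100.0], [140.0], [160.0]])                # outputs

def ccr_efficiency(k: int) -> float:
    """Solve: min theta  s.t.  sum_j lam_j x_j <= theta * x_k,
                               sum_j lam_j y_j >= y_k,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)          # decision vector: [theta, lam_1..lam_n]
    c[0] = 1.0
    # input constraints:  sum_j lam_j x_j - theta * x_k <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # output constraints: -sum_j lam_j y_j <= -y_k
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

theta0 = ccr_efficiency(0)   # efficiency score of DMU 0, in (0, 1]
```

A score of 1 marks a unit on the efficient frontier; lower scores show the proportional input reduction a unit would need to become efficient.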
In this paper we propose an approach for compact storage of big graphs. We propose preprocessing algorithms for a certain class of graphs that can significantly increase the data density on disk and improve the performance of fundamental graph operations.
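The paper's preprocessing algorithms are not reproduced here, but the compressed sparse row (CSR) layout is a common baseline for packing a graph into flat arrays that raise on-disk data density and speed up neighbor scans. A minimal sketch:

```python
# CSR layout for a directed graph: two flat arrays instead of per-vertex lists.
# offsets[v]..offsets[v+1] delimits the neighbor slice of vertex v.
def to_csr(num_vertices, edges):
    """Pack an edge list (u, v) into (offsets, neighbors) arrays."""
    degree = [0] * num_vertices
    for u, _ in edges:
        degree[u] += 1
    offsets = [0] * (num_vertices + 1)
    for v in range(num_vertices):
        offsets[v + 1] = offsets[v] + degree[v]
    neighbors = [0] * len(edges)
    cursor = offsets[:-1].copy()          # next free slot per vertex
    for u, v in edges:
        neighbors[cursor[u]] = v
        cursor[u] += 1
    return offsets, neighbors

def neighbors_of(offsets, neighbors, v):
    return neighbors[offsets[v]:offsets[v + 1]]

offsets, nbrs = to_csr(4, [(0, 1), (0, 2), (1, 2), (3, 0)])
```

Because both arrays are contiguous, they serialize to disk without pointer overhead and support sequential scans of any vertex's adjacency in one read.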
We investigate an evolutionary model with recombination and random switches in the fitness function caused by changes in a special gene. The dynamical behaviour of the fitness landscape induced by such specific mutations is closely related to the mutator phenomenon, which, together with recombination, plays an important role in modern evolutionary studies. It is of great interest to develop classical quasispecies models towards better compliance with observations; however, these properties significantly increase the complexity of the mathematical models. In this paper, we consider symmetric fitness landscapes for several different environments, using the Hamilton-Jacobi equation (HJE) method to solve the system of equations in the large-genome-length limit. The mean fitness and surplus are calculated explicitly for the steady state, and the relevance of the analytical results is supported by numerical simulation. We consider the most general case of two landscapes with arbitrary mutation and recombination rates (three independent parameters); the exact solution of the evolutionary dynamics is obtained via a fourth-order algebraic equation. For a simpler case with two independent parameters, we derive the solution from a quadratic algebraic equation. For the simplest case, two landscapes with the same mutation and recombination rates, we derive an effective fitness landscape, mapping the model with recombination onto the Crow-Kimura model.
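For reference, the Crow-Kimura (parallel) quasispecies model onto which the recombination model is mapped can be sketched as follows; the notation below is the standard one for this model and is an assumption, not taken verbatim from the paper:

```latex
\frac{dp_i}{dt} = p_i\bigl(f_i - \bar f(t)\bigr)
  + \sum_{j \neq i}\bigl(\mu_{ij}\,p_j - \mu_{ji}\,p_i\bigr),
\qquad
\bar f(t) = \sum_i f_i\,p_i(t),
```

where $p_i$ is the frequency of genotype $i$, $f_i$ its fitness, and $\mu_{ij}$ the rate of mutation from genotype $j$ to $i$. Mapping the recombination model onto this form amounts to finding an effective landscape $f_i^{\mathrm{eff}}$ (a symbol introduced here for illustration) that reproduces the same steady-state mean fitness.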
The effectiveness of implementing research results is one of the main indicators that must be taken into account when allocating budget funds for research. The requirement to spend the budget funds allocated for research efficiently calls for continuous improvement of the methodological apparatus for decision support on the allocation of funds, including consideration of how effectively each research institution has implemented previously obtained results. With the development of information technology, it becomes important to improve the quality of budget spending through information support for decision making on the organization of research in the Health Ministry of Russia, based on an assessment of the potential of each research institution that reflects its ability to achieve the stated results when executing state contracts. We present a technique for the integrated assessment of the effectiveness of research results obtained in scientific institutions of the Health Ministry of Russia within state assignments or state contracts, based on a set of scientometric and statistical indicators as well as expert evaluation. The requirements for the methodology and indicators, and for the approaches and methods of expert evaluation, are set out in turn.
The present paper studies the mechanics of agent-informational clustering in a social network, using user segmentation with an influence criterion as the example task. The main features of data generated by social networks (social big data) and the metrics that characterize influential network nodes are considered. A review of community-detection algorithms based on social network theory, as well as clustering methods based on machine learning, is carried out. Metrics for assessing the quality of segmentation are presented. The results of applying the methods (selected on the basis of the performed analysis) to a test dataset are shown. The limitations of the considered approaches and possible problems in implementing the algorithms in the field of social network analysis are described. An evaluation of their effectiveness is performed.
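The paper's selected methods and dataset are not specified in the abstract; as an illustration only, the sketch below runs one standard community-detection algorithm (greedy modularity maximization) on a benchmark social graph, scores the segmentation with the modularity metric, and picks an "influential" user per community by degree centrality:

```python
# Illustrative pipeline: detect communities, score the segmentation, and
# rank users inside each community by an influence metric (degree centrality).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                      # classic 34-user social network
communities = greedy_modularity_communities(G)  # list of frozensets of node ids
score = modularity(G, communities)              # segmentation quality metric

centrality = nx.degree_centrality(G)            # a simple influence criterion
leaders = [max(c, key=centrality.get) for c in communities]
```

Swapping in other detectors (e.g., label propagation) or other influence metrics (betweenness, PageRank) changes only the two function calls, which makes this a convenient frame for the comparative evaluation the paper describes.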
In data envelopment analysis, methods for constructing sections of the frontier have recently been proposed to visualize the production possibility set. The aim of this paper is to develop, prove, and test methods for visualizing production possibility sets using parallel computations. A general scheme of the algorithms for constructing sections (visualizations) of the production possibility set is proposed. The algorithm breaks the original large-scale problem into independent subproblems solved in parallel threads; the piecewise solutions are then combined into a global solution. An algorithm for constructing a generalized production function is described in detail.
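The paper's general scheme (split into independent pieces, solve in parallel, merge) can be sketched on a toy one-input, one-output frontier; the free-disposal-style frontier rule and the data below are assumptions for illustration, not the paper's algorithm:

```python
# Sketch of the parallel scheme: each thread traces its own chunk of input
# levels independently; the piecewise results are merged into one section.
from concurrent.futures import ThreadPoolExecutor

# toy DMUs: (input, output) pairs defining a 2D section of the set
dmus = [(1.0, 2.0), (2.0, 5.0), (4.0, 6.0), (5.0, 6.5)]

def frontier_piece(xs):
    """Best reachable output for each input level x in this chunk."""
    return [(x, max((y for u, y in dmus if u <= x), default=0.0)) for x in xs]

grid = [i * 0.5 for i in range(1, 12)]      # input levels 0.5 .. 5.5
chunks = [grid[i::3] for i in range(3)]     # three independent work chunks
with ThreadPoolExecutor(max_workers=3) as pool:
    pieces = pool.map(frontier_piece, chunks)
section = sorted(pt for piece in pieces for pt in piece)  # merged section
```

Each chunk is solved without any shared state, so the merge step is a plain sort; this is the property that lets the real large-scale problems scale across threads.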
We study the problem of designing an optimal two-dimensional circularly symmetric convolution kernel (or point spread function, PSF) with a circular support of a chosen radius R. Such a function is optimal for estimating an unknown signal (image) from an observation obtained through a convolution-type distortion with additive random noise. The technique is then generalized to the case of an imprecisely known or random PSF of the measurement distortion. It is shown that the construction of the optimal convolution kernel reduces to a one-dimensional Fredholm equation of the first or second kind on the interval [0, R]. If the reconstruction PSF is sought in a finite-dimensional class of functions, the problem naturally reduces to a finite-dimensional optimization problem or even a system of linear equations. We also analyze how reconstruction quality depends on the radius of the convolution kernel, which allows finding a good balance between computational complexity and the quality of image reconstruction.
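The paper's kernel and right-hand side are not given in the abstract, but the reduction it mentions, from a Fredholm equation of the second kind on [0, R] to a linear system, can be sketched with a Nyström (quadrature) discretization; the kernel and f below are illustrative assumptions:

```python
# Discretize phi(t) - lam * \int_0^R K(t, s) phi(s) ds = f(t) on [0, R]
# with the trapezoidal rule, reducing it to (I - lam * K * W) phi = f.
import numpy as np

R, n, lam = 1.0, 200, 0.5
t = np.linspace(0.0, R, n)
w = np.full(n, t[1] - t[0])        # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

K = np.exp(-np.abs(t[:, None] - t[None, :]))   # assumed smooth kernel
f = np.cos(t)                                   # assumed right-hand side

A = np.eye(n) - lam * K * w[None, :]
phi = np.linalg.solve(A, f)                     # discrete solution on the grid

# residual of the discretized integral equation
residual = np.max(np.abs(phi - lam * (K * w[None, :]) @ phi - f))
```

For a first-kind equation the identity term drops out and the system becomes ill-conditioned, which is exactly why restricting the reconstruction PSF to a finite-dimensional class, as the paper does, is attractive.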
Procedures for sequential updating of information are important for processing “big data streams” because they avoid accumulating and storing large data sets. As a model of information accumulation, we study the Bayesian updating procedure for linear experiments. Analysis and gradual transformation of the original processing scheme, aimed at increasing its efficiency, lead to certain mathematical structures: information spaces. We show that processing can be simplified by introducing a special intermediate form of information representation. Thanks to the rich algebraic properties of the corresponding information space, this form unifies the updating and increases its efficiency. It also leads to various options for parallelizing the inherently sequential Bayesian procedure, suited to distributed data processing platforms such as MapReduce. We also show how a certain formalization of the concept of information, with its algebraic properties, can arise simply from adapting data processing to big-data demands. The approaches and concepts developed in the paper increase the efficiency and uniformity of data processing and offer a systematic way to transform sequential processing into parallel processing.
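The key algebraic property can be sketched concretely (the model, prior, and names below are assumptions for illustration): for a linear-Gaussian experiment, the posterior is carried by the additive "information" pair (XᵀX, Xᵀy), so chunked, order-independent updates give exactly the one-shot batch answer, the property that enables MapReduce-style parallelization:

```python
# Sequential Bayesian updating in information form: information pairs from
# independent data chunks simply add, so map (per-chunk stats) + reduce (sum)
# reproduces the batch posterior mean exactly.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=1000)

def info(Xc, yc):
    """Map step: local information pair for one data chunk."""
    return Xc.T @ Xc, Xc.T @ yc

P = np.eye(3)        # prior precision (an assumed ridge-like prior)
h = np.zeros(3)      # prior information vector
for Xc, yc in zip(np.array_split(X, 7), np.array_split(y, 7)):
    Pc, hc = info(Xc, yc)
    P, h = P + Pc, h + hc          # reduce step: plain addition

beta_stream = np.linalg.solve(P, h)                          # streamed result
beta_batch = np.linalg.solve(np.eye(3) + X.T @ X, X.T @ y)   # one-shot result
```

Because addition is associative and commutative, the seven chunks could equally be processed on different machines and combined in any order.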
Despite the critical importance of achieving business and IT alignment in organizations, its practical operationalization at the level of specific actors, decisions, and documents remains underexplored. This paper operationalizes alignment as a pipeline with five distinct decision-making phases: positioning, focusing, prioritizing, assessing, and implementing.
Quantum technologies are currently being introduced across many areas of human activity. This paper considers the prospects of using quantum technologies for the needs of biomedicine and substantiates the need to develop new quantum technologies and methods for organizing the processing and analysis of large biomedical data. It analyzes the opportunities and prospects of using modern quantum computers in biomedicine, discusses the prospects of quantum sensors, and considers the possibility of using quantum communication lines in the near future to transmit confidential personalized biomedical information. Prospects for using quantum dots to kill both multidrug-resistant bacteria and cancer cells are also discussed.
A kinetic model is proposed to describe self-organized criticality on Twitter. The model is based on a fractional three-parameter self-organization scheme with stochastic sources. It is shown that the adiabatic regime of self-organization towards the critical state is determined by the coordinated action of a relatively small number of network users. The model describes the subcritical, self-organized critical, and supercritical states of Twitter.
The paper considers one of the important elements of the customer relationship management business process found in almost every enterprise: the consumer claims management process. Timely analysis of customer complaints not only affects the company's reputation but also helps reduce costs and improve product quality. This article discusses the process of managing consumer complaints in accordance with the regulatory document ISO 10002-2007 «Quality management – Customer satisfaction». As part of the work, a claims management model is built and a methodology for analyzing the repeatability of claims is developed. The developed method for analyzing the repeatability of claims was successfully tested at an enterprise of the military-industrial complex and can be applied in the future at other industrial enterprises.
Purpose: to introduce a mathematical model of a distorted meaningful text and a measure of its distortion, to define a numerical classification of the distortion of meaningful texts, and to present applications of the model in cryptography.
Research methods: decryption of a more complex Vigenère cipher that uses an almost periodic (noisy) key is performed as decryption of a noisy plaintext encrypted with a periodic key (by well-known Vigenère decryption methods), but with a different probability distribution of plaintext characters; the task is then reduced to determining the level of noise in the plaintext that is still acceptable for understanding its content.
Results: methods for decrypting a gamma (additive keystream) cipher with weak keys are presented. A new complexity estimate was obtained, and the reliability of the method was improved owing to the fact that there are more k-noisy weak (almost periodic) keys than periodic ones. A formula for calculating the probability of occurrence of characters after the k-th noising was obtained. Artificial languages were introduced for ease of calculation, and practical examples of text noising were considered (the necessary calculations were made using a program written in Python). The quality of the distorted plaintext content was assessed by identifying two thresholds of understanding.
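For context, the periodic-key baseline that the almost-periodic (noisy) key setting extends is the classic Vigenère cipher; the alphabet and sample key below are assumptions for illustration, not the paper's artificial languages:

```python
# Plain periodic-key Vigenere over a 26-letter alphabet: encryption adds the
# repeated key symbol-by-symbol (mod 26); decryption subtracts it.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        k = ALPHABET.index(key[i % len(key)])   # strictly periodic key reuse
        out.append(ALPHABET[(ALPHABET.index(ch) + sign * k) % 26])
    return "".join(out)

ct = vigenere("ATTACKATDAWN", "LEMON")
pt = vigenere(ct, "LEMON", decrypt=True)
```

In the paper's setting the key is only almost periodic, which is equivalent to applying this periodic scheme to a plaintext whose characters have been "noised", shifting their probability distribution, as the methods section describes.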
This tutorial discusses large scientific projects and the volumes of data they generate. It provides an overview of the scientific computer networks that allow high-speed transmission of large amounts of data for these projects, and of the computing systems offered by leading manufacturers of computer equipment for processing such data, which provide storage of large (including distributed) data volumes as well as analytics and parallel data processing in real time. Particular attention is paid to the security of the scientific information transmitted.