The behavior of two-dimensional (2D) atom localization is explored by monitoring the probe absorption in a microwave-driven four-level atomic medium under the action of two orthogonal standing-wave fields. Because of the position-dependent atom-field interaction, information about the atom's position can be obtained by measuring the absorption of the weak probe field. The localization behavior is found to be significantly improved by the joint quantum interference induced by the standing-wave and microwave-driven fields. Most importantly, the atom can be localized at a particular position, and the maximal probability of finding the atom within one period of the standing-wave fields reaches unity when the system parameters are properly adjusted. The proposed scheme may provide a promising route to high-precision, high-resolution 2D atom localization.
In this paper we propose a novel image representation method that characterizes an image as a spatiogram (a generalized histogram) of colors quantized by a Gaussian Mixture Model (GMM). First, we quantize the color space using a GMM learned from the training images by the Expectation-Maximization (EM) algorithm. The number of Gaussian components (i.e., the number of quantized color bins) is determined automatically by the Bayesian Information Criterion (BIC). Second, we combine the spatiogram representation with the quantized Gaussian mixture color model. Intuitively, a spatiogram is a histogram in which the distribution of colors is spatially weighted by the locations of the pixels contributing to each color bin. We modify the spatiogram representation to fit our framework, which employs Gaussian color components instead of discrete color bins. Finally, two images are compared by measuring the similarity between their spatiograms, for which we propose a new measure based on the Jensen-Shannon Divergence (JSD). We applied the new representation and comparison method to the image retrieval task. Experiments on several publicly available COREL image datasets demonstrate the effectiveness of the proposed representation for image retrieval. (C) 2015 Elsevier B.V. All rights reserved.
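As a minimal, hedged sketch of the ideas above (not the paper's implementation): a toy spatiogram using hard nearest-center color quantization in place of the learned GMM, the Jensen-Shannon divergence between bin weights, and an illustrative similarity that also compares per-bin mean pixel locations. The function names and the exact similarity formula are assumptions for illustration.

```python
import numpy as np

def spatiogram(image, centers):
    """Toy spatiogram: per color bin, the pixel fraction (weight) and the
    mean normalized pixel location. Hard nearest-center quantization stands
    in for the paper's Gaussian mixture color model."""
    h, w, _ = image.shape
    pix = image.reshape(-1, 3).astype(float)
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([xs.ravel() / w, ys.ravel() / h], axis=1)
    labels = np.argmin(((pix[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    k = len(centers)
    weights = np.zeros(k)
    means = np.zeros((k, 2))
    for b in range(k):
        mask = labels == b
        if mask.any():
            weights[b] = mask.mean()
            means[b] = pos[mask].mean(axis=0)
    return weights, means

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float((a * np.log(a / b)).sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similarity(sg1, sg2, sigma=0.2):
    """Illustrative spatiogram similarity: a color term (1 - JSD of bin
    weights) times a spatially weighted overlap of bin masses."""
    w1, m1 = sg1
    w2, m2 = sg2
    spatial = np.exp(-((m1 - m2) ** 2).sum(axis=1) / (2 * sigma ** 2))
    return float((1.0 - jsd(w1, w2)) * (np.minimum(w1, w2) * spatial).sum())
```

An image compared with itself scores higher than a spatially rearranged image with the same color histogram, which is the point of the spatial weighting.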
As a popular technique in recommender systems, Collaborative Filtering (CF) has attracted significant attention in recent years; however, its privacy-related issues, especially for neighborhood-based CF methods, cannot be overlooked. This study addresses these privacy issues by proposing a Private Neighbor Collaborative Filtering (PNCF) algorithm, which includes two privacy-preserving operations: Private Neighbor Selection and Perturbation. Using the item-based method as an example, Private Neighbor Selection is constructed on the notion of differential privacy, meaning that neighbors are privately selected for the target item according to its similarities with others. Recommendation-Aware Sensitivity and a re-designed differential privacy mechanism are introduced in this operation to enhance recommendation performance. A Perturbation operation then hides the true ratings of the selected neighbors by adding Laplace noise. The PNCF algorithm reduces the magnitude of the noise introduced by the traditional differential privacy mechanism. Moreover, a theoretical analysis shows that the proposed algorithm can resist a KNN attack while retaining recommendation accuracy. Experiments on two real datasets show that the proposed PNCF algorithm obtains a rigorous privacy guarantee without significant accuracy loss. Crown Copyright (C) 2013 Published by Elsevier B.V. All rights reserved.
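The two privacy-preserving operations can be illustrated with standard differential-privacy building blocks. The sketch below uses the generic exponential mechanism for neighbor selection and the Laplace mechanism for rating perturbation; it does not reproduce the paper's Recommendation-Aware Sensitivity or its re-designed mechanism, and all names and parameters are illustrative.

```python
import numpy as np

def private_neighbor_selection(similarities, k, epsilon, rng=None):
    """Exponential-mechanism-style sampling of k neighbors without
    replacement: items more similar to the target are exponentially more
    likely to be picked. A simplified stand-in for Private Neighbor
    Selection."""
    rng = np.random.default_rng(rng)
    sims = np.asarray(similarities, dtype=float)
    chosen, available = [], list(range(len(sims)))
    for _ in range(min(k, len(sims))):
        scores = np.exp(epsilon * sims[available] / 2.0)
        probs = scores / scores.sum()
        idx = rng.choice(len(available), p=probs)
        chosen.append(available.pop(idx))
    return chosen

def laplace_perturb(ratings, sensitivity, epsilon, rng=None):
    """Perturbation step: add Laplace noise with scale sensitivity/epsilon
    to each selected neighbor's rating (the standard Laplace mechanism)."""
    rng = np.random.default_rng(rng)
    return ratings + rng.laplace(0.0, sensitivity / epsilon,
                                 size=np.shape(ratings))
```

With a large privacy budget the selection concentrates on the most similar neighbors; with a small budget it approaches uniform sampling, which is the usual privacy-utility trade-off.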
In most methods for modeling mortality rates, the idiosyncratic shocks are assumed to be homoskedastic. This study investigates the conditional heteroskedasticity of mortality from a statistical time-series perspective. We start by testing the conditional heteroskedasticity of the period effect in the naive Lee-Carter model on several mortality datasets. We then introduce the Generalized Dynamic Factor method and the multivariate BEKK GARCH model to describe mortality dynamics and the conditional heteroskedasticity of mortality. After specifying the numbers of static and dynamic factors via several variants of information criteria, we compare our model with two other models, namely the Lee-Carter model and the state space model. Based on several error-based measures of performance, our results indicate that when the numbers of static and dynamic factors are properly determined, the proposed method dominates the others. Finally, we use our method combined with the Kalman filter to forecast the mortality rates of Iceland and the period life expectancies of Denmark, Finland, Italy and the Netherlands. (C) 2009 Elsevier B.V. All rights reserved.
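The heteroskedasticity testing in the first step can be illustrated with Engle's classical ARCH LM test; this is a generic sketch, not the paper's exact testing procedure.

```python
import numpy as np

def arch_lm_test(resid, lags=4):
    """Engle's ARCH LM test (sketch): regress squared residuals on their own
    lags. Under conditional homoskedasticity the statistic n*R^2 is
    asymptotically chi-squared with `lags` degrees of freedom; a large value
    is evidence of ARCH effects."""
    e2 = np.asarray(resid, dtype=float) ** 2
    y = e2[lags:]
    X = np.column_stack([np.ones(len(y))]
                        + [e2[lags - j: len(e2) - j] for j in range(1, lags + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    sst = (y - y.mean()) @ (y - y.mean())
    r2 = 1.0 - (u @ u) / sst
    return len(y) * r2
```

Applied to the period-effect residuals of a fitted Lee-Carter model, a statistic above the chi-squared critical value would motivate a GARCH-type specification.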
Advances in Difference Equations, 2013, 2013(1):148. ISSN: 1687-1847
[Yang, Hua] Wuhan Polytech Univ, Sch Math & Comp Sci, Wuhan, Hubei, Peoples R China; [Jiang, Feng] Zhongnan Univ Econ & Law, Sch Stat & Math, Wuhan, Hubei, Peoples R China.
In this paper, we study the exponential stability in the p-th moment of mild solutions to impulsive stochastic neutral partial differential equations with memory. Sufficient conditions ensuring the stability of the impulsive stochastic system are obtained by establishing a new integral inequality. The results obtained here generalize and improve some well-known results.
This paper presents an electrocardiographic (ECG) signal classification method. The method first uses a revised locally linear embedding (LLE) algorithm to reduce the dimension of the ECG data. This manifold-distance-based LLE addresses a defect of conventional LLE: because it relies on the Euclidean distance, conventional LLE cannot properly measure the distance between high-dimensional samples. Using the revised LLE algorithm for dimension reduction retains more of the original data information and extracts the features of high-dimensional ECG data more effectively, thereby improving classification accuracy. The method then applies a kernel-based fuzzy C-means clustering algorithm to classify the ECG signals. Classification tests on four common types of ECG signals from the MIT-BIH database show that the proposed method reaches an overall accuracy of 99%.
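The clustering stage can be sketched with a generic kernel fuzzy C-means using an RBF kernel; the kernel choice, parameters, and initialization here are illustrative, not those of the paper.

```python
import numpy as np

def kernel_fcm(X, c=2, m=2.0, gamma=0.5, iters=50, seed=0):
    """Kernel fuzzy C-means (sketch). Prototypes are kept implicitly in the
    RBF feature space via the kernel trick; memberships are updated from
    kernel-induced squared distances."""
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF kernel matrix
    rng = np.random.default_rng(seed)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        w = Um / Um.sum(axis=0, keepdims=True)   # normalized per-cluster weights
        # squared feature-space distance of each point to each implicit prototype
        d2 = (np.diag(K)[:, None]
              - 2.0 * K @ w
              + np.einsum('jk,jl,lk->k', w, K, w)[None, :])
        d2 = np.maximum(d2, 1e-12)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U
```

Hard labels are obtained by taking the maximum membership per row; on well-separated data the fuzzy partition matches the true grouping.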
Based on ideal absolute errors and relative errors, a new grey model, GMp(1,1), is presented, and the existence of its solution is established under a few conditions. The MGMp(1,n) model is then presented. These optimized models, GMp(1,1) and MGMp(1,n), have good anti-noise properties with respect to absolute and relative errors. Examples illustrate that they achieve very good fitting and forecasting results. This work was supported by the National Science Foundation (Grant Nos. 79970025, 69874018) and the National Defence Science and Technology Foundation (Grant No. OOJ15.3.JWO528).
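For orientation, the classical GM(1,1) model that GMp(1,1) optimizes can be sketched as follows; this is the textbook baseline, not the paper's error-optimized variant.

```python
import numpy as np

def gm11(x0, horizon=3):
    """Classical GM(1,1) grey model. Fit: accumulate the series, form
    trapezoid background values, estimate (a, b) by least squares on
    x0(k) = -a*z1(k) + b, then forecast via the time-response function
    and de-accumulate."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                             # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat  # fitted values followed by `horizon` forecasts
```

On a near-exponential series the baseline model fits and extrapolates closely, which is the setting where grey models are typically applied.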
2nd International Conference on Optimization and Control (ICOCO)
DEC 07-09, 2015
Chongqing, PEOPLES R CHINA
[Jiang, Feng] Zhongnan Univ Econ & Law, Sch Math & Stat, Wuhan 430073, Peoples R China; [Yang, Hua] Wuhan Polytech Univ, Sch Math & Comp Sci, Wuhan 430023, Peoples R China; [Tian, Tianhai] Monash Univ, Sch Math Sci, Melbourne, Vic 3800, Australia.
Ait-Sahalia-Rho model; boundedness; convergence in probability
The Ait-Sahalia-Rho model is an important tool for studying a number of financial problems, including the term structure of interest rates. However, since the coefficient functions of this model do not satisfy the linear growth condition, the properties of its solution cannot be studied with traditional techniques. In this paper we overcome the mathematical difficulties caused by the nonlinear growth condition by using numerical simulation. We first discuss analytical properties of the model and then the convergence in probability of numerical solutions of the Ait-Sahalia-Rho model. Finally, an option-pricing example illustrates that the numerical solution is an effective way to estimate expected payoffs.
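A hedged sketch of the simulation approach: an Euler-Maruyama scheme for an Ait-Sahalia-type rate model with illustrative coefficients, plus a Monte Carlo estimate of an expected payoff. The coefficient values, the crude positivity floor, and the undiscounted payoff are assumptions for illustration, not the paper's scheme, convergence analysis, or calibration.

```python
import numpy as np

def euler_maruyama_ait_sahalia(x0, T, n, params, rho=1.5, seed=0):
    """Euler-Maruyama path for an Ait-Sahalia-type rate model
        dX = (a1/X - a2 + a3*X - a4*X^2) dt + s * X^rho dW,
    whose drift and diffusion violate the linear growth condition."""
    a1, a2, a3, a4, s = params
    dt = T / n
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = a1 / x[i] - a2 + a3 * x[i] - a4 * x[i] ** 2
        x[i + 1] = (x[i] + drift * dt
                    + s * x[i] ** rho * np.sqrt(dt) * rng.standard_normal())
        x[i + 1] = max(x[i + 1], 1e-8)   # crude positivity floor for the sketch
    return x

def expected_payoff(strike, x0, T, n, params, paths=2000, rho=1.5, seed=0):
    """Monte Carlo estimate of E[max(X_T - strike, 0)] (undiscounted)."""
    vals = [max(euler_maruyama_ait_sahalia(x0, T, n, params, rho, seed + p)[-1]
                - strike, 0.0)
            for p in range(paths)]
    return float(np.mean(vals))
```

Lowering the strike raises the estimated payoff path by path, so the Monte Carlo estimates preserve that monotonicity.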
teaching evaluation; method of weighted mean; classification
The paper selects certain features to classify students into types and calculates the average evaluation score given by each type of student; then, using the method of weighted mean, it assigns each student type a weight and aggregates the teaching-evaluation data. Compared with the traditional method, the results are closer to the teacher's true level, and the data processing is more scientific and reasonable. The paper also points out the disadvantages of the current assessment system and recommends that universities adopt the method of weighted mean to improve its accuracy and effectiveness.
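The aggregation described above reduces to a weighted mean of per-type averages. A minimal sketch, where the type labels and weight values are illustrative rather than from the paper:

```python
def weighted_evaluation(scores_by_type, weights):
    """Average the evaluation scores within each student type, then combine
    the per-type averages using the assigned type weights (a weighted mean,
    normalized by the total weight)."""
    total_w = sum(weights[t] for t in scores_by_type)
    result = 0.0
    for t, scores in scores_by_type.items():
        avg = sum(scores) / len(scores)
        result += weights[t] * avg
    return result / total_w
```

For example, if type A students (weight 2) average 85 and type B students (weight 1) average 60, the aggregated score is (2*85 + 1*60) / 3.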
A novel image segmentation method that combines spectral clustering and Gaussian mixture models is presented in this paper. The new method contains three phases. First, the image is partitioned into small regions modeled by a Gaussian Mixture Model (GMM), and the GMM is solved by an Expectation-Maximization (EM) algorithm with a newly proposed Image Reconstruction Criterion, named EM-IRC. Second, the distances among the GMM components are measured using the Kullback-Leibler (KL) divergence, and a revised Floyd's algorithm developed from Zadeh's operations is used to build the similarity matrix based on those distances. Finally, spectral clustering is applied to this improved similarity matrix to merge the GMM components, i.e., the corresponding small image regions, into the final segmentation result. Our contributions include the new EM-IRC algorithm, the revised Floyd's algorithm, and the novel overall framework. The experimental evaluation on the IRIS dataset and a real-world image segmentation problem demonstrates the effectiveness of the proposed approach. (C) 2014 Elsevier B.V. All rights reserved.
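The component distances in the second phase have a closed form; the sketch below shows the KL divergence between two Gaussian components, which is the quantity those distances are built from (the revised Floyd's algorithm and Zadeh's operations are not reproduced here).

```python
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N0 || N1) between two multivariate
    Gaussians:
      0.5 * [ tr(S1^-1 S0) + (m1-m0)^T S1^-1 (m1-m0) - d + ln(|S1|/|S0|) ]
    Note KL is asymmetric; a symmetrized version (average of both
    directions) is commonly used when building a similarity matrix."""
    d = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = np.asarray(mu1, dtype=float) - np.asarray(mu0, dtype=float)
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))
```

A distance of this kind can then be converted into a similarity, e.g. exp(-KL), before running spectral clustering.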
Given the stringent performance requirements of next-generation mobile communication systems, especially 5G networks, coverage has become a crucial problem that would otherwise require service providers to deploy more base stations. Deploying new stations, however, is not cost-effective and requires network replanning. This issue can be overcome by introducing Unmanned Aerial Vehicles (UAVs) into the existing communication system. Accordingly, an intelligent solution is presented for the accurate and efficient placement of UAVs over demand areas, increasing the capacity and coverage of the wireless network. The proposed approach uses priority-wise dominance and entropy approaches to solve the two problems considered in this paper, namely the Macro Base Station (MBS) decision problem and the cooperative UAV allocation problem. Finally, network bargaining is defined over these solutions to accurately map the UAVs to the desired areas, significantly improving the network parameters, namely throughput, per-User-Equipment (UE) capacity, 5th-percentile spectral efficiency, network delay, and guaranteed signal-to-interference-plus-noise ratio, by 6.3%, 16.6%, 55.9%, 48.2%, and 36.99%, respectively, compared with existing approaches.
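The entropy approach mentioned above is plausibly related to the standard entropy-weight method from multi-criteria decision making; the following is a generic sketch of that method, not the paper's exact formulation, and the decision matrix is hypothetical.

```python
import numpy as np

def entropy_weights(M):
    """Entropy-weight method (sketch): given a decision matrix M with one
    row per alternative (e.g. candidate UAV placement) and one column per
    criterion, criteria whose values vary more across alternatives carry
    more information and receive larger weights."""
    P = M / M.sum(axis=0, keepdims=True)           # column-wise proportions
    P = np.clip(P, 1e-12, None)
    n = M.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)   # per-criterion entropy in [0, 1]
    d = 1.0 - e                                    # divergence degree
    return d / d.sum()                             # normalized weights
```

A criterion that is identical across all alternatives gets entropy 1 and weight 0, since it cannot help discriminate between placements.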