Please use this identifier to cite or link to this item: http://hdl.handle.net/2031/6636

Title: Statistical learning algorithms for regression and regularized spectral clustering
Other Titles: Tong ji xue xi zhong hui gui he zheng ze hua pu ju lei suan fa de yan jiu
統計學習中回歸和正則化譜聚類算法的研究 (Research on regression and regularized spectral clustering algorithms in statistical learning)
Authors: Lü, Shaogao (呂紹高)
Department: Department of Mathematics
Degree: Doctor of Philosophy
Issue Date: 2011
Publisher: City University of Hong Kong
Subjects: Machine learning -- Statistical methods.
Cluster analysis.
Notes: CityU Call Number: Q325.5 .L8 2011
vi, 79 leaves ; 30 cm.
Thesis (Ph.D.)--City University of Hong Kong, 2011.
Includes bibliographical references (leaves [71]-79)
Type: thesis
Abstract: In this thesis we investigate several algorithms in statistical learning theory; our contributions consist of three parts.

First, we focus on the least squares regularized regression learning algorithm in a setting of unbounded sampling. Our task is to establish learning rates by means of integral operator techniques. By imposing a moment hypothesis on the unbounded sampling outputs and a function space condition associated with the marginal distribution, we derive learning rates that are consistent with those in the bounded sampling setting.

Second, we consider spectral clustering algorithms formulated as regularized learning schemes in a sample-dependent hypothesis space with an l1-regularizer. The data-dependent space spanned by the kernel function provides great flexibility for learning. The main difficulty in studying spectral clustering in our setting is that the hypothesis space depends not only on a sample but also on certain constraint conditions; this technical difficulty is resolved by a local polynomial reproduction formula and a constructive method. The consistency of the spectral clustering algorithms is stated in terms of properties of the data space, the underlying measure, the kernel, and the regularity of a target function.

Finally, we take a learning theory viewpoint to study a family of learning schemes for regression related to positive linear operators in approximation theory. Such a learning scheme is generated from a random sample by a kernel function parameterized by a scaling parameter. The essential difference between this algorithm and classical approximation schemes is the randomness of the sampling points, which breaks the well-distributed sampling condition often required in approximation theory. We investigate the efficiency of the learning algorithm in a regression setting and present learning rates stated in terms of the smoothness of the regression function, the size of the variances, and the distances of the kernel centers from regular grids. The error analysis is conducted by estimating the sample error and the approximation error. Two examples with kernel functions related to continuous Bernstein bases and Jackson kernels are studied in detail, and concrete learning rates are obtained.
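For orientation, here is a minimal numerical sketch of the classical least squares regularized regression (kernel ridge) estimator the first part builds on: f_z = argmin over f in H_K of (1/m) Σ (f(x_i) − y_i)² + λ‖f‖²_K, solved via the kernel expansion f_z(x) = Σ α_i K(x, x_i). The Gaussian kernel, synthetic data, and regularization parameter below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=0.5):
    """Gram matrix K[i, j] = exp(-|x_i - z_j|^2 / (2 sigma^2))."""
    d2 = (X[:, None] - Z[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(x, y, lam, sigma=0.5):
    """Solve (K + lam * m * I) alpha = y for the expansion coefficients."""
    m = len(x)
    K = gaussian_kernel(x, x, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(alpha, x_train, x_new, sigma=0.5):
    """Evaluate f_z(x) = sum_i alpha_i K(x, x_i)."""
    return gaussian_kernel(x_new, x_train, sigma) @ alpha

# Synthetic data with Gaussian (hence unbounded) noise on the outputs,
# echoing the unbounded-sampling setting studied in the thesis.
rng = np.random.default_rng(0)
m = 200
x = rng.uniform(0.0, 1.0, m)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(m)

alpha = krr_fit(x, y, lam=1e-3)
x_grid = np.linspace(0.0, 1.0, 5)
print(krr_predict(alpha, x, x_grid))
```

The moment hypothesis in the thesis controls exactly the kind of unbounded output noise simulated here, allowing the integral-operator analysis to recover the bounded-case rates.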
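The second part modifies spectral clustering with an l1-regularized scheme in a sample-dependent hypothesis space. The sketch below implements only the standard (unregularized) normalized spectral clustering pipeline that the thesis's scheme departs from; the Gaussian affinity, the concentric-rings data, and the plain k-means step are illustrative assumptions.

```python
import numpy as np

def spectral_clustering(X, k, sigma=0.3, iters=50, seed=0):
    """Standard normalized spectral clustering (Ng-Jordan-Weiss style)."""
    rng = np.random.default_rng(seed)
    # Gaussian affinity matrix with zero diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Embed each point by the k eigenvectors with smallest eigenvalues.
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize
    # Plain Lloyd's k-means on the spectral embedding.
    centers = U[rng.choice(len(U), k, replace=False)]
    for _ in range(iters):
        labels = ((U[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([U[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

# Two noisy concentric rings: separable spectrally, hard for raw k-means.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.1 * rng.standard_normal((200, 2))
print(np.bincount(spectral_clustering(X, k=2)))
```

In the thesis, the eigenvector step is replaced by a learning scheme over a kernel-spanned, sample-dependent function space with constraints, which is where the local polynomial reproduction argument is needed.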
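For the third part, a toy sketch of a Bernstein-type positive linear operator estimator may help fix ideas: sorted random sample points play the role of perturbed kernel centers near the regular grid k/n, and the estimate is f̂(t) = Σ_k y_(k) C(n,k) t^k (1−t)^(n−k). This construction and the test function are assumptions for illustration, not the thesis's exact scheme.

```python
import numpy as np
from math import comb

def bernstein_estimate(x_sorted, y_sorted, t):
    """Bernstein-type positive linear operator estimator.

    Treats the k-th sorted sample point as a perturbed center near the
    grid node k/n and forms f_hat(t) = sum_k y_(k) C(n,k) t^k (1-t)^(n-k).
    The weights are nonnegative and sum to one (binomial theorem).
    """
    n = len(x_sorted) - 1
    weights = np.array([comb(n, k) * t ** k * (1 - t) ** (n - k)
                        for k in range(n + 1)])
    return weights @ y_sorted

rng = np.random.default_rng(2)
n = 60
x = np.sort(rng.uniform(0.0, 1.0, n + 1))       # random centers, roughly at k/n
y = x ** 2 + 0.05 * rng.standard_normal(n + 1)  # noisy samples of f(x) = x^2

for t in (0.25, 0.5, 0.75):
    print(t, bernstein_estimate(x, y, t), t ** 2)
```

The gap between the sorted random centers and the grid nodes k/n is precisely the "distance of kernel centers from regular grids" quantity entering the learning rates stated in the abstract.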
Online Catalog Link: http://lib.cityu.edu.hk/record=b4086788
Appears in Collections:MA - Doctor of Philosophy

Files in This Item:

File            Description    Size     Format
abstract.html                  132 B    HTML
fulltext.html                  132 B    HTML

Items in CityU IR are protected by copyright, with all rights reserved, unless otherwise indicated.

 
