City University of Hong Kong



Title: Learning with kernel based regularization schemes
Other Titles: Ji yu he de zheng ze hua xue xi suan fa [Kernel-based regularized learning algorithms]
Authors: Xiao, Quanwu (肖銓武)
Department: Department of Mathematics
Degree: Doctor of Philosophy
Issue Date: 2009
Publisher: City University of Hong Kong
Subjects: Machine learning.
Kernel functions.
Notes: CityU Call Number: Q325.5 .X53 2009
v, 81 leaves ; 30 cm.
Thesis (Ph.D.)--City University of Hong Kong, 2009.
Includes bibliographical references (leaves [73]-81)
Type: thesis
Abstract: Learning theory is the mathematical foundation of machine learning algorithms, which have important applications in many areas of science and technology. In this thesis we mainly consider learning algorithms involving kernel-based regularization schemes. While algorithms such as the support vector machine, given by Tikhonov regularization schemes associated with convex loss functions and reproducing kernel Hilbert spaces, have been extensively studied in the literature, we introduce some non-standard settings and provide insightful analysis for them.

Firstly, we study a regression algorithm with an ℓ1 regularizer in a hypothesis space generated from the data or samples by a nonsymmetric kernel. The data-dependent nature of the algorithm leads to an extra error term called the hypothesis error, which is essentially different from regularization schemes with data-independent hypothesis spaces. By dealing with the regularization error, the sample error, and the hypothesis error, we estimate the total error in terms of properties of the kernel, the input space, the marginal distribution, and the regression function of the regression problem. Learning rates are derived by choosing suitable values of the regularization parameter. An improved error decomposition approach is used in our data-dependent setting.

Secondly, we consider the binary classification problem of learning from samples drawn from a non-identical sequence of probability measures. Our main goal is to provide satisfactory estimates for the excess misclassification error of the produced classifiers. Similar results can be obtained for multi-class classification, because we give a comparison theory for the error analysis of multi-class classifiers.

Finally, we consider the sparsity issue for ℓ1 regularization schemes. This topic has attracted much attention recently, and we give some discussion for further study on both theoretical and practical aspects.
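The ℓ1-regularized scheme in a data-dependent hypothesis space described in the abstract can be illustrated with a minimal sketch. The snippet below is not the thesis's algorithm: the Gaussian kernel, the coordinate-descent solver, and all parameter values are assumptions chosen for illustration. It minimizes (1/2m)‖y − Kc‖² + λ‖c‖₁ over coefficients c, where the kernel matrix K is built from the sample points themselves, so the hypothesis space depends on the data.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=0.5):
    # K[i, j] = exp(-|x_i - z_j|^2 / (2 sigma^2)); need not be symmetric when X != Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def soft_threshold(v, t):
    # Proximal map of t * |.|, the workhorse of l1 coordinate descent
    return np.sign(v) * max(abs(v) - t, 0.0)

def l1_kernel_regression(X, y, lam=0.01, sigma=0.5, n_iter=200):
    """Coordinate descent for min_c (1/2m)||y - K c||^2 + lam * ||c||_1,
    with K[i, j] = k(x_i, x_j) generated from the samples themselves
    (a data-dependent hypothesis space)."""
    K = gaussian_kernel(X, X, sigma)
    m = len(y)
    c = np.zeros(m)
    col_sq = (K ** 2).sum(axis=0) / m      # (1/m) ||K_j||^2 for each column j
    r = y - K @ c                          # current residual
    for _ in range(n_iter):
        for j in range(m):
            r += K[:, j] * c[j]            # remove coordinate j's contribution
            rho = float(K[:, j] @ r) / m   # correlation of column j with residual
            c[j] = soft_threshold(rho, lam) / col_sq[j] if col_sq[j] > 0 else 0.0
            r -= K[:, j] * c[j]            # restore with the updated coefficient
    return c, K

# Example use: fit a smooth target from 40 one-dimensional samples
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = np.sin(3.0 * X[:, 0])
c, K = l1_kernel_regression(X, y)
```

The penalty parameter `lam` plays the role of the regularization parameter whose choice drives the learning rates analyzed in the thesis: larger values shrink more coefficients exactly to zero (sparsity), smaller values reduce the training error.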
Appears in Collections: MA - Doctor of Philosophy

Files in This Item:

File           Size    Format
abstract.html  132 B   HTML
fulltext.html  132 B   HTML

Items in CityU IR are protected by copyright, with all rights reserved, unless otherwise indicated.


DSpace Software © 2013 CityU Library