DSpace Collection:
http://dspace.cityu.edu.hk:80/handle/2031/769
http://dspace.cityu.edu.hk:80/handle/2031/7212
Title: Integrated studies on job shop processing and shipment planning
Authors: Zhang, Zheng (張政)
Abstract: In supply chain management, several material and product flows exist with different features and different directions. Assembly parts can be manufactured at different factories and shipped to the assembly factory for processing. Finished products are then shipped to retailers or customers by logistics companies. In the current research, a job is defined as a sequence of manufacturing and logistics activities. A manufacturing activity performs specific engineering tasks in the production process. Logistics activities generally refer to the physical movement of materials and products, and to warehousing, in the shipping process. In practice, decision makers often manage a portfolio of jobs or product lines in the company. The arrangement of manufacturing activities is considered in “job shop processing”; the objective is to determine the machine assignment that completes the job activities. In the shipping process, the management of logistics activities is studied in “shipment planning”. Nowadays, a manufacturing job may be processed in several locations, so research on job shop processing and shipment planning should be integrated. Extensive work has been conducted on job shop scheduling and on shipment planning separately, but the existing studies mostly ignore the linkages between the two. The current thesis develops several models to study the integration of job shop processing and shipment planning, and addresses managerial issues that affect job processing cost.
The first study considers a shipment planning problem that includes both forward and reverse logistics jobs and the identification of reverse logistics centers. The inclusion of reverse logistics jobs introduces manufacturing process activities and transport activities in different directions. In the current study, the cost savings obtained by combining forward and reverse logistics activities are considered in the management of the job portfolio. The site selection of reverse logistics centers focuses on the cost savings from grouping job processing activities in the same location. A reduction in total job processing cost can be achieved through the compatibility and integration of forward and reverse logistics jobs.
The second study investigates a job shop processing problem in which the locations of processing activities are to be determined. We consider a setting in which the machines can be set up in several factories at different locations, so the shipping of materials and parts during job processing becomes a consideration in cost saving. The problem aims to determine the machine setup locations and the machine assignment of job activities so as to minimize the total job processing cost. A nonlinear integer programming model is proposed to solve this job shop problem; managerial issues are also discussed.
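The trade-off at the heart of the second model can be sketched in miniature. The instance below is entirely hypothetical (three sequential activities, two candidate locations, made-up costs), and brute-force enumeration stands in for the thesis's nonlinear integer program, which it does not reproduce:

```python
from itertools import product

# Hypothetical data: 3 sequential activities of one job, 2 candidate locations.
# proc_cost[a][loc]: cost of performing activity a at location loc.
proc_cost = [[4, 6], [5, 3], [2, 7]]
# ship_cost[i][j]: cost of moving the job from location i to location j.
ship_cost = [[0, 2], [2, 0]]

def total_cost(assign):
    """Processing cost plus shipping between consecutive activity locations."""
    cost = sum(proc_cost[a][loc] for a, loc in enumerate(assign))
    cost += sum(ship_cost[assign[k]][assign[k + 1]] for k in range(len(assign) - 1))
    return cost

# Enumerate every location assignment (feasible only for tiny instances).
best = min(product(range(2), repeat=3), key=total_cost)
print(best, total_cost(best))
```

Note how shipping costs can make it cheaper to keep consecutive activities in one location even when another location processes an individual activity more cheaply; this coupling is exactly why job shop processing and shipment planning must be solved together.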
The current thesis contributes to the integrated study of job shop processing and shipment planning. The research provides useful tools for practitioners in supply chain management who manage several product lines that involve both job processing and shipment activities.
Notes: CityU Call Number: HD38.5 .Z435 2012; ix, 168 leaves 30 cm.; Thesis (Ph.D.)--City University of Hong Kong, 2012.; Includes bibliographical references (leaves 152-168)
http://dspace.cityu.edu.hk:80/handle/2031/7197
Title: Contracts and sensitivity analysis in a retailer dominant supply chain system under symmetric and asymmetric information
Authors: Su, Chang (蘇暢)
Abstract: This thesis investigates different scenarios of a two-echelon supply chain in which a manufacturer wholesales a product to a Stackelberg-dominant retailer, who delivers the product to end customers. In a retailer-dominant situation, the retailer can have centralized control of the chain and take the lead in setting the contract terms, such as the ordering price and policies. However, this may lead to unexpected dissension; for example, the dominated manufacturer may not be inclined to participate in the deal if its minimal profit is not guaranteed. Such dissension can be more serious when the retailer does not have accurate or full information about the manufacturer's production cost. In reality, we have seen that the growing dominance of many large retailers has altered traditional channel behavior and mechanisms. In the literature, however, few studies have been devoted to this area. This study aims to provide a better understanding of the contracting problem in a retailer-dominant supply chain by investigating how a dominant retailer should design a purchase contract under different scenarios.
This thesis examines, from a dominant retailer's perspective, the above-mentioned contract dispute problem under both symmetric and asymmetric information environments, taking into consideration the manufacturer's nonparticipation. We propose several effective contract schemes to maximize the retailer's profit while guaranteeing the manufacturer the least profit it requires.
Firstly, in a price-sensitive newsvendor model with symmetric information, a linear price-discount sharing (PDS) contract and a novel integrative return policy are developed. In our setting, a second order and the return of unsold products are permitted. Instead of assigning all pricing decisions to the dominant retailer, the wholesale price is negotiated between the two players, i.e., the retailer and the manufacturer, following the rule that the player with more market power has greater bargaining power. The results show that it is better for a dominant retailer to leave some pricing discretion to the dominated manufacturer.
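For readers unfamiliar with the newsvendor setting underlying this model, the standard ordering rule (not the thesis's PDS contract itself) can be sketched with hypothetical numbers: the retailer orders up to the critical fractile of the demand distribution determined by price and cost.

```python
from statistics import NormalDist

# Classic newsvendor ordering rule (background only, not the PDS contract):
# order up to the critical fractile (p - c) / p of the demand distribution.
p, c = 10.0, 4.0                        # hypothetical retail price and unit cost
demand = NormalDist(mu=100, sigma=20)   # hypothetical demand distribution

critical_fractile = (p - c) / p         # fraction of demand worth covering
q_star = demand.inv_cdf(critical_fractile)
print(round(q_star, 1))
```

Contract terms such as returns and price discounts shift the effective underage and overage costs, and hence the fractile; that is the lever the PDS contract and the return policy in this study operate on.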
Secondly, we explore the same issues in a similar model with asymmetric information, where the manufacturer has private knowledge of its production cost while the retailer can only estimate it. By combining a regular markup contract, a revenue-sharing contract, and a slotting-fee contract with volume discount contracts, several workable contract schemes and some extensions are developed, based on the well-known menu of contracts (MC). We prove that the MC is the optimal form of volume discount contract, but it may not be practical to implement. The numerical results show that all these contract schemes perform well in both traditional and consignment situations.
Thirdly, to find out how the magnitude of information asymmetry affects the decisions and profits of the supply chain players, we analyze the sensitivity of profit to the degree of information asymmetry (measured by its variance). Intuitively, the informed player should always wish the other player to be as ignorant as possible, and the uninformed player should always try to obtain a more accurate forecast of the asymmetric information. To some extent, the result turns out to be counter-intuitive: the uninformed retailer's expected profit is a convex function of the accuracy level of the asymmetric information, and the informed manufacturer's real profit function is concave when he accepts the retailer's contract. In other words, if the dominant retailer is able to control or choose the accuracy level of the forecast information, its best choice lies at one of two extremes, i.e., either very high or very low accuracy. The manufacturer would not be willing to help the retailer improve forecasting accuracy unless the retailer's forecast is poor (very low accuracy). This surprising result can be explained by the theory of information rent. We also compare our results with Corbett et al. (2004), who studied a similar problem in an upstream-player-dominant channel but obtained different results. The study shows that our investigation is robust in both retailer-dominant and manufacturer-dominant scenarios.
Notes: CityU Call Number: HD38.5 .S8 2012; viii, 112 leaves : ill. 30 cm.; Thesis (Ph.D.)--City University of Hong Kong, 2012.; Includes bibliographical references (leaves 102-112)
http://dspace.cityu.edu.hk:80/handle/2031/7192
Title: Applications of copula methods in financial risk management
Authors: Lu, Xunfa (魯訓法)
Abstract: Financial risk management plays an increasingly important role in helping individuals, financial institutions, and even countries avoid risks and achieve a secure investment environment. It is defined as the process of assessing and managing the financial risks facing an investor by reducing exposure to identified risks. Measuring financial risks accurately, and then making efficient investment decisions, may provide an investor with competitive advantages and considerable profits. The measurement of financial risks is, however, constrained by the behavior of real-life financial variables: abundant evidence shows that financial variables usually exhibit fat tails, skewness, and asymmetric dependence. These stylized features challenge traditional methods of financial risk management based on the normality assumption in three respects. First, the distribution of a univariate variable cannot be sufficiently fitted by a univariate normal distribution, or by alternative elliptical distributions. Second, a multivariate normal distribution cannot capture the excess kurtosis and skewness of multivariate variables, despite its simple tractability, and can therefore underestimate the dependency risks of multivariate financial variables. Last, linear correlation, which is usually used to describe the dependency between variables in traditional portfolio risk management, is also insufficient when the joint distribution of the variables is non-elliptical. To solve these problems, this dissertation resorts to a promising method based on copulas, combined with GARCH and Realized Volatility models, to investigate the risks of multivariate financial variables.
The main achievements of this dissertation are threefold. Firstly, copulas combined with GARCH and Realized Volatility models are used to construct multivariate distributions and then to estimate portfolio risks in financial markets. The results show that copula-based models fit financial data better than the traditional models. Secondly, different marginal models, such as GARCH and Realized Volatility models, have a significant effect on the portfolio Value at Risk. Finally, there is significant skewness both in the marginal distributions and in the dependence structure; accordingly, the skewed Student-t distribution fits the selected datasets better than the normal or Student-t distribution.
Structurally, this dissertation is organized as follows. Chapter 1 emphasizes the importance of portfolio financial risk management and introduces the well-known measure of financial risk, Value at Risk. Chapter 2 introduces the background knowledge of dependence and the theory of copulas; for financial time series, the dissertation considers both time-invariant and time-varying copula models, and parameter estimation and model selection for copulas are also explained in this chapter. The modeling of marginal distributions is presented in Chapter 3, where GARCH and Realized Volatility models are fitted to the marginal distributions of the financial variables of interest. Chapter 4 illustrates how to use the constructed copula-based models to forecast Value at Risk by Monte Carlo simulation; backtesting techniques are applied to evaluate the performance of the different models, and the empirical results are presented in detail. Finally, conclusions and suggestions are outlined in Chapter 5.
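The Monte Carlo VaR step described above can be sketched under heavy simplifying assumptions: a static Gaussian copula with a fixed, hypothetical correlation and Student-t marginals with made-up scale parameters stand in for the fitted GARCH/Realized Volatility marginals and estimated copulas of the dissertation.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)
n = 100_000

# Gaussian copula: correlate uniforms through a normal dependence structure.
corr = np.array([[1.0, 0.6], [0.6, 1.0]])   # hypothetical copula correlation
z = rng.multivariate_normal(np.zeros(2), corr, size=n)
u = norm.cdf(z)                              # copula samples on [0, 1]^2

# Hypothetical fat-tailed marginals (the thesis fits GARCH/RV models instead).
r1 = t.ppf(u[:, 0], df=5) * 0.01             # asset 1 daily returns
r2 = t.ppf(u[:, 1], df=5) * 0.015            # asset 2 daily returns

# 99% one-day Value at Risk of an equally weighted portfolio,
# read off as the empirical 1% quantile of simulated portfolio returns.
port = 0.5 * r1 + 0.5 * r2
var_99 = -np.quantile(port, 0.01)
print(round(var_99, 4))
```

Because the marginals are transformed separately from the dependence structure, either piece can be swapped out independently, which is exactly the modularity that motivates the copula approach.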
Notes: CityU Call Number: HG4529.5 .L8 2012; viii, 97 leaves 30 cm.; Thesis (Ph.D.)--City University of Hong Kong, 2012.; Includes bibliographical references (leaves 84-93)
http://dspace.cityu.edu.hk:80/handle/2031/6985
Title: Supply chain network design under facility disruptions
Authors: Peng, Peng (彭鵬)
Abstract: Key players in the supply chain, including manufacturers, retailers, and distributors, have realized the value of comprehensive network planning, in which they make detailed plans for constructing new facilities, expanding distribution networks, partnering with new suppliers, and other important logistics activities. During the design phase, many parameters in supply chain design problems are assumed to be fixed. However, the impact of the design decisions spans a long horizon, during which many parameters such as costs, demands, and capacities will fluctuate. It is therefore dangerous to neglect data uncertainties, since a small change in the data input may lead to solutions that are far from optimal in the long run. Possible fluctuations, and the reactive strategies for responding to them, have to be taken into account in order to cope with the uncertain environment.
Many techniques have been developed to deal with optimization problems under uncertainty, such as sensitivity analysis, stochastic programming, and robust optimization. Sensitivity analysis procedures are usually tedious to implement, while stochastic programming methods often lead to objective functions that are hard to evaluate. The theoretical framework of robust optimization has been well developed in recent years and has received growing attention in both academia and industry. This thesis studies a variety of problems on designing supply chain networks that achieve good performance in uncertain environments, with methods derived from recent developments in robust optimization techniques.
Disruptions represent a form of uncertainty that usually causes capacity loss, transportation blockage, price inflation, and other fluctuations in supply chain networks. In this thesis, we first study a strategic supply chain management problem of designing reliable networks that perform as well as possible under normal conditions, while achieving relatively good performance when various forms of disruptions strike. We present a mixed-integer programming model whose objective is to minimize the nominal cost (the cost when no disruptions occur) while reducing the disruption risk by applying the p-robustness criterion (which bounds the cost in disruption scenarios). We demonstrate the tradeoff between the nominal cost and system reliability, showing that substantial improvements in reliability are often possible with minimal increases in cost. We also show that our model produces less conservative solutions than those generated by common robustness measures. We propose a hybrid metaheuristic algorithm based on genetic algorithms, local improvement, and the shortest augmenting path method to solve the problem. Numerical tests show that the heuristic greatly outperforms CPLEX in terms of solution speed, while still delivering excellent solution quality.
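The p-robustness criterion can be illustrated on a toy instance (all data hypothetical, and brute-force enumeration replaces the mixed-integer program and metaheuristic of the thesis): minimize the nominal cost over facility-opening patterns whose cost in every disruption scenario stays within a factor (1 + p) of that scenario's own optimum.

```python
from itertools import product

# Hypothetical 3-candidate-facility instance; scenario 0 is "no disruption",
# and scenario s >= 1 knocks out facility s - 1.
def scenario_cost(x, s):
    open_cost = sum(c for xi, c in zip(x, (10, 12, 8)) if xi)
    down = {0: None, 1: 0, 2: 1, 3: 2}[s]
    capacity = sum(xi for i, xi in enumerate(x) if i != down)
    serve_cost = 30 / max(capacity, 0.5)  # crude congestion-style service cost
    return open_cost + serve_cost

patterns = [x for x in product((0, 1), repeat=3) if any(x)]
scenarios = range(4)
p = 0.5  # hypothetical relative regret bound

# Per-scenario optimal costs serve as the p-robust baselines.
opt = {s: min(scenario_cost(x, s) for x in patterns) for s in scenarios}

# Feasible = within (1 + p) of every scenario's optimum; among those,
# pick the pattern with the lowest nominal (scenario-0) cost.
feasible = [x for x in patterns
            if all(scenario_cost(x, s) <= (1 + p) * opt[s] for s in scenarios)]
best = min(feasible, key=lambda x: scenario_cost(x, 0))
print(best)
```

Tightening p shrinks the feasible set toward more redundant (and nominally more expensive) designs, which is the nominal-cost-versus-reliability tradeoff the study quantifies.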
The disadvantage of the p-robust approach is that it allows only an incomplete description of the scenario space, since the set of scenarios may grow exponentially as the problem size increases. On the one hand, only small problems can be solved with limited computational power, which makes the model impractical for real-world applications, where supply chain networks usually consist of hundreds of facilities. On the other hand, conserving computational power by considering only a small portion of the total number of scenarios would harm the accuracy of the results. Traditional stochastic programs, which are risk neutral in the sense that they optimize the expected system-wide cost, are also difficult to solve, since exact evaluation of the expected value is either impossible or prohibitively expensive. To cope with this computational difficulty, we adopt a Monte Carlo simulation based method called sample average approximation (SAA) to solve a stochastic p-robust logistics network design problem in which we minimize the expected total cost while enforcing p-robust constraints for each scenario. SAA approximates the expected objective function of the stochastic problem by a sample average estimate derived from random samples. It usually yields mixed-integer programming counterpart problems, which can then be solved by deterministic optimization techniques. SAA not only approximates the optimal objective value but also produces confidence intervals on it, which makes the method more attractive. We propose an SAA-based method to solve the stochastic robust model. Statistical lower and upper bounds are derived as well to evaluate the solution quality. Numerical tests show that high-quality solutions can be obtained with reasonable computational effort.
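The SAA idea can be sketched on a one-dimensional stand-in problem (a hypothetical newsvendor-style cost, not the network design model of the thesis): replace the expected cost by a mean over sampled scenarios, optimize the sample average, then evaluate the chosen decision on a fresh sample to attach a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(q, demand):
    """Hypothetical cost: overage penalty 1 per unit, underage penalty 4 per unit."""
    return 1.0 * np.maximum(q - demand, 0) + 4.0 * np.maximum(demand - q, 0)

# Sample average approximation: replace E[cost] with a mean over N scenarios.
N = 5_000
demand_sample = rng.normal(100, 20, size=N)

candidates = np.arange(80, 141)
sample_avg = np.array([cost(q, demand_sample).mean() for q in candidates])
q_saa = candidates[sample_avg.argmin()]

# Evaluate q_saa on an independent sample and attach a normal-approximation
# confidence interval, giving a statistical bound on its true expected cost.
eval_sample = rng.normal(100, 20, size=N)
costs = cost(q_saa, eval_sample)
mean, half = costs.mean(), 1.96 * costs.std(ddof=1) / np.sqrt(N)
print(q_saa, round(mean, 2), round(half, 2))
```

The two-sample structure, optimize on one sample and validate on another, is what yields the statistical lower and upper bounds on solution quality mentioned above; in the thesis the inner optimization is a mixed-integer program with p-robust constraints rather than a one-dimensional search.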
Notes: CityU Call Number: HD38.5 .P465 2011; xvi, 144 leaves : ill. 30 cm.; Thesis (Ph.D.)--City University of Hong Kong, 2011.; Includes bibliographical references (leaves 134-144)