You can also see the upcoming events on our Google Calendar.
|03/24/2017||12:00-1:00pm||Gross Hall 318||Yu Cheng||Computational Aspects of Optimal Information Revelation||info|
Computational Aspects of Optimal Information Revelation
Understanding the role of information in strategic interactions is becoming increasingly important in today's information age. The design of information structures is emerging as a new and still largely unexplored area: mechanism design for information. In this talk, we examine the optimization problem faced by an informed principal who must choose how to reveal partial information in order to induce a desirable equilibrium, a task often referred to as signaling or persuasion. We settle the computational complexity of optimal signaling in two of the most fundamental game-theoretic scenarios: Bayesian zero-sum games and network routing games.
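The persuasion problem the abstract describes can be made concrete with a classic two-state example (a hedged illustration with hypothetical numbers, not the talk's constructions): a sender commits to a signaling scheme to persuade a receiver who acts only when the posterior on the sender's preferred state clears a threshold.

```python
# Classic two-state persuasion sketch: sender sends a "persuasive" signal
# with probability 1 in the good state and p_low in the bad state; the
# receiver acts iff the posterior reaches the threshold. Numbers are
# hypothetical, chosen only to illustrate the structure of the problem.
prior = 0.3          # prior probability of the state the sender likes
threshold = 0.5      # receiver acts iff posterior >= threshold

def act_prob(p_low):
    """Receiver's action probability under the signal parameterized by p_low."""
    p_signal = prior + (1 - prior) * p_low
    posterior = prior / p_signal
    return p_signal if posterior >= threshold else 0.0

best = max(act_prob(p / 1000) for p in range(1001))
print(best)   # approaches prior/threshold = 0.6; full revelation gives only 0.3
```

The gap between 0.3 (revealing everything) and 0.6 (optimal partial revelation) is exactly the value of designing the information structure; the talk's results concern the complexity of such optimization in richer games.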
|03/03/2017||12:00-1:00pm||North 311||Simina Branzei||Computational Fair Division and Mechanism Design||info|
Computational Fair Division and Mechanism Design
The emergence of online platforms has brought about a fundamental shift in economic thinking: the design of economic systems is now a problem that computer science can tackle. For the first time we are able to move from studying economic systems as mainly natural systems to carefully designing and executing them on a computer. Prominent examples of digital market mechanisms include auctions for ads (run by companies such as Google) and electromagnetic spectrum (used by the US government). I will discuss several recent developments in fair division and mechanism design. I will start with a dictatorship theorem for fair division (cake cutting), showing that requiring truthfulness gives rise to a dictator. Next, I will turn to the paradigmatic model of multi-unit markets, where I will show that truthfulness requires discarding resources, but that this problem can be alleviated when the market is even mildly competitive. More generally, I will discuss the theme of simplicity and complexity in mechanism design, and the interplay among economics, computation, and learning.
|02/24/2017||12:00-1:00pm||Gross Hall 318||Jacob Leshno||The Simple Structure of Top Trading Cycles in School Choice: A Continuum Model||info|
The Simple Structure of Top Trading Cycles in School Choice: A Continuum Model
(Joint with Irene Lo) Many cities determine the assignment of students to schools through a school choice mechanism which calculates an assignment based on student preferences and school priorities. The prominent Top Trading Cycles (TTC) algorithm is strategy-proof and Pareto efficient, but the combinatorial description of TTC makes it non-transparent to parents and difficult to analyze for designers. We give a tractable characterization of the TTC mechanism for school choice: the TTC assignment can be simply described by n^2 cutoffs, where n is the number of schools. We define TTC in a continuum model, in which these thresholds can be directly calculated by a differential equation. The model also allows us to compute comparative statics, analyze sequential trade procedures, and show that changes in the priority structure can have non-trivial indirect effects on the allocation. We also apply this model to solve for optimal investment in school quality, and help explain empirical findings about the relation between the TTC assignment and the Deferred Acceptance assignment. To validate the continuum model we show that it gives a good approximation for strongly converging economies. Our analysis draws on an interesting connection between continuous trading procedures and continuous time Markov chains.
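For intuition, the discrete TTC mechanism whose outcome the continuum model characterizes can be sketched directly. The school and student data below are hypothetical toy inputs; the talk's contribution is the cutoff characterization, not this algorithm.

```python
# Minimal discrete Top Trading Cycles for school choice: students point to
# their favorite school with seats left, schools point to their highest-
# priority remaining student, and everyone on a pointer cycle trades.
def ttc(prefs, priorities, capacity):
    assignment, active = {}, set(prefs)
    cap = dict(capacity)
    open_schools = {s for s in cap if cap[s] > 0}
    while active and open_schools:
        s_pt = {i: next(s for s in prefs[i] if s in open_schools) for i in active}
        c_pt = {s: next(i for i in priorities[s] if i in active) for s in open_schools}
        i, seen = next(iter(active)), []
        while i not in seen:              # follow pointers until a cycle repeats
            seen.append(i)
            i = c_pt[s_pt[i]]
        for j in seen[seen.index(i):]:    # everyone on the cycle is assigned
            assignment[j] = s_pt[j]
            active.discard(j)
            cap[s_pt[j]] -= 1
            if cap[s_pt[j]] == 0:
                open_schools.discard(s_pt[j])
    return assignment

prefs = {'A': ['s1', 's2'], 'B': ['s1', 's2'], 'C': ['s2', 's1']}
priorities = {'s1': ['B', 'A', 'C'], 's2': ['C', 'A', 'B']}
print(ttc(prefs, priorities, {'s1': 1, 's2': 2}))  # B gets s1; A and C get s2
```

The combinatorial bookkeeping above is exactly what makes TTC hard to explain to parents; the talk replaces it with n^2 cutoffs in the continuum limit.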
|12/02/2016||12:00-1:00pm||Gross Hall 318||John Dickerson||Better Matching Markets via Optimization||info|
Better Matching Markets via Optimization
The exchange of indivisible goods without money addresses a variety of constrained economic settings where a medium of exchange--such as money--is considered inappropriate. Participants are either matched directly with another participant or, in more complex domains, in barter cycles and chains with other participants before exchanging their endowed goods. We show that techniques from computer science and operations research, combined with the recent availability of massive data and inexpensive computing, can guide the design of such matching markets and enable these markets by running them in the real world. A key application domain for our work is kidney exchange, an organized market where patients with end-stage renal failure swap willing but incompatible donors. We present new models that address three fundamental dimensions of kidney exchange: (i) uncertainty over the existence of possible trades, (ii) balancing efficiency and fairness, and (iii) inherent dynamism. For each dimension, we design scalable branch-and-price-based integer programming market clearing methods. Next, we combine these dimensions, along with high-level human-provided guidance, into a unified framework for learning to match in a general dynamic setting. This framework, which we coin FutureMatch, takes as input a high-level objective (e.g., "maximize graft survival of transplants over time") decided on by experts, then automatically learns from data how to make this objective concrete and learns the "means" to accomplish this goal--a task that, in our experience, humans handle poorly.
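The clearing problem at the heart of kidney exchange can be sketched on a toy instance: enumerate short barter cycles in a directed compatibility graph and select a vertex-disjoint set covering the most patient-donor pairs. This brute force is only a stand-in for the talk's scalable branch-and-price integer programs, and the graph below is hypothetical.

```python
from itertools import permutations

# Pick vertex-disjoint 2- and 3-cycles covering the most patient-donor pairs.
def clear(pairs, compatible):
    cycles = list({frozenset(c)
                   for k in (2, 3)
                   for c in permutations(pairs, k)
                   if all(compatible[c[i]][c[(i + 1) % k]] for i in range(k))})
    best = []
    def search(i, chosen, used):
        nonlocal best
        if sum(map(len, chosen)) > sum(map(len, best)):
            best = list(chosen)
        for j in range(i, len(cycles)):
            if not cycles[j] & used:      # only add cycles disjoint from chosen
                search(j + 1, chosen + [cycles[j]], used | cycles[j])
    search(0, [], frozenset())
    return best

pairs = ['P1', 'P2', 'P3', 'P4']
edges = {('P1', 'P2'), ('P2', 'P1'), ('P2', 'P3'), ('P3', 'P4'), ('P4', 'P2')}
compatible = {a: {b: (a, b) in edges for b in pairs} for a in pairs}
print(clear(pairs, compatible))   # the 3-cycle on P2, P3, P4 covers most pairs
```

Note the tension the talk addresses: the greedy-looking 2-cycle P1-P2 blocks the larger 3-cycle, and real instances add uncertainty, fairness, and dynamics on top of this combinatorial core.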
|10/28/2016||12:00-1:00pm||LSRC D344||Balaji Srinivasan||How Bitcoin enables a Machine-Payable Web||info|
How Bitcoin enables a Machine-Payable Web
First, we had the World Wide Web, a web of links between documents. Then we had the Social Web, a social network of relationships between people. We believe the third web will be the Machine-Payable Web, where each node in the network is a machine and each edge is a micropayment between machines. Towards this end, we've developed open source software called 21 that makes it simple to perform micropayments over HTTP. The software allows you to get digital currency onto any machine headlessly, set up web services that accept and transmit bitcoin over HTTP, and discover other machines with similar services to autonomously trade with. The overall effect is to turn digital currency into the ultimate scarce system resource, on par with CPU, RAM, and hard drive space. Just as one can create a database index that consumes disk space to save time, we show that one can now write code that instead spends digital currency to outsource a computation to save time. To illustrate the breadth of the implications, we conclude with several working examples: bitcoin-aware intelligent agents, APIs that implement autonomous surge pricing, and the development of a native market data structure as a first-class alternative to the well-known queue. We ask that audience members bring their laptops to code along with the speaker!
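The machine-to-machine payment flow the talk describes follows the HTTP 402 "Payment Required" pattern. The sketch below is a generic, mocked simulation of that request-pay-retry loop with hypothetical function names; it is not the actual 21 API, and real payment verification is replaced by an in-memory token set.

```python
# Mocked machine-payable endpoint: the server quotes a price via a 402
# response; the client pays, then retries with proof of payment.
PRICE_SATOSHIS = 500
paid_tokens = set()

def server(request):
    token = request.get("payment_token")
    if token in paid_tokens:
        return {"status": 200, "body": "premium data"}
    return {"status": 402, "price": PRICE_SATOSHIS}

def pay(price):
    """Stand-in for a real on-chain or micropayment-channel payment."""
    token = f"receipt-{price}"
    paid_tokens.add(token)
    return token

def client():
    resp = server({})
    if resp["status"] == 402:      # pay the quoted price and retry
        token = pay(resp["price"])
        resp = server({"payment_token": token})
    return resp

print(client())   # {'status': 200, 'body': 'premium data'}
```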
|08/26/2016||1:30-2:30pm||Gross 318||Neeldhara Misra||Elicitation for Preferences Single Peaked on Trees||info|
Elicitation for Preferences Single Peaked on Trees
This talk will focus on the problem of preference elicitation, where the goal is to learn the preferences of agents (which we model as total orders) by querying them about their pairwise preferences. We will survey known results, which have studied the problem both on general domains and on structured ones, such as the domain of single-peaked preferences. As one might expect, structured domains admit a lower query complexity. We will consider domains that are single-peaked over trees, which generalize the notion of single-peakedness. We will see that the query complexity is influenced by the number of leaves, the path cover number, and the distance from a path of the underlying single-peaked tree, whereas other natural parameters such as maximum degree, diameter, and pathwidth play no direct role in determining query complexity. We will also consider the query complexity of finding a weak Condorcet winner for this domain.
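As a baseline, eliciting one agent's total order by pairwise queries is just comparison sorting, using O(m log m) queries on an unstructured domain of m alternatives. The sketch below counts queries against a hypothetical preference oracle; the talk's results concern how single-peakedness on trees changes this picture.

```python
from functools import cmp_to_key

# Elicit a total order by asking pairwise "do you prefer a to b?" queries,
# counting how many queries the comparison sort issues.
def elicit(alternatives, prefers):
    queries = 0
    def cmp(a, b):
        nonlocal queries
        queries += 1
        return -1 if prefers(a, b) else 1
    return sorted(alternatives, key=cmp_to_key(cmp)), queries

true_order = list('abcdefgh')
rank = {x: i for i, x in enumerate(true_order)}     # hidden ground truth
order, queries = elicit(list('hgfedcba'), lambda a, b: rank[a] < rank[b])
print(order == true_order, queries)   # recovers the order in far fewer
                                      # than m*(m-1)/2 = 28 queries
```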
|06/01/2016||12-1pm||Gross 304B||Emmanouil (Manolis) Pountourakis||Procrastination with Variable Present Bias||info|
Procrastination with Variable Present Bias
Individuals working towards a goal often exhibit time-inconsistent behavior, making plans and then failing to follow through. One well-known model of such behavioral anomalies is present-bias discounting: individuals over-weight present costs by a bias factor. This model explains many time-inconsistent behaviors, but can make stark predictions in many settings: individuals either follow the most efficient plan for reaching their goal or procrastinate indefinitely. We propose a modification in which the present-bias parameter can vary over time, drawn independently at each step from a fixed distribution. Following Kleinberg and Oren (2014), we use a weighted task graph to model task planning, and measure the cost of procrastination as the relative expected cost of the chosen path versus the optimal path. We use a novel connection to optimal pricing theory to describe the structure of the worst-case task graph for any present-bias distribution. We then leverage this structure to derive conditions on the bias distribution under which the worst-case ratio is exponential (in time) or constant. We also examine conditions on the task graph that lead to improved procrastination ratios: graphs with a uniformly bounded distance to the goal, and graphs in which the distance to the goal monotonically decreases on any path.
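The Kleinberg-Oren task-graph model can be sketched in a few lines: at each node the agent draws a bias b and follows the edge minimizing b times the immediate cost plus the unbiased cost-to-go. The graph, costs, and bias value below are hypothetical, chosen to reproduce the classic procrastination pattern.

```python
# Simulate a present-biased agent on a weighted task DAG and compare the
# realized cost to the optimal path cost (the "procrastination ratio").
def traverse(graph, start, goal, draw_bias):
    dist = {goal: 0.0}
    def d(v):                      # true cheapest cost from v to the goal
        if v not in dist:
            dist[v] = min(c + d(w) for w, c in graph[v])
        return dist[v]
    total, v = 0.0, start
    while v != goal:
        b = draw_bias()
        v, c = min(graph[v], key=lambda e: b * e[1] + d(e[0]))
        total += c
    return total, d(start)

# Each day: finish the task (cost 1) or defer a day (cost 0.1); day 3 is
# a hard deadline. A constant bias of 2 makes the agent defer every day.
graph = {t: [('done', 1.0), (t + 1, 0.1)] for t in range(3)}
graph[3] = [('done', 1.0)]
cost, opt = traverse(graph, 0, 'done', lambda: 2.0)
print(cost / opt)   # ratio ~1.3: deferral fees pile up before the deadline
```

Replacing `lambda: 2.0` with a random draw (e.g. `lambda: random.choice([1.0, 3.0])`) gives the variable-bias model the talk analyzes.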
|05/31/2016||12-1pm||Gross 304B||Michael Albert||Mechanism Design with Correlated Distributions: Limited Correlation and Uncertain Distributions||info|
Mechanism Design with Correlated Distributions: Limited Correlation and Uncertain Distributions
The standard assumption in the mechanism design literature is that the agents' valuations over outcomes are independently distributed. However, in many situations, it is natural to assume that agents' valuations are correlated. When valuations are correlated, many of the standard negative results in mechanism design do not apply. Notably, in a monopolistic auction with n bidders, the seller can extract the full social surplus as revenue, given certain assumptions about the correlation structure. However, existing results have two limitations: 1) they provide sufficient, but not necessary, conditions for full surplus extraction, and 2) they require perfect knowledge of the distribution over agents' valuations. In this talk, I will present research where we provide necessary and sufficient conditions for full surplus extraction under correlated valuations for the case of a monopolistic seller selling one good. Further, we extend the concepts of incentive compatibility and individual rationality to encompass uncertainty over the distribution of agents' valuations, and present an algorithm, polynomial time in the number of agent types, for designing these mechanisms. Finally, we demonstrate experimentally that these "robust" mechanisms can significantly outperform other mechanism design paradigms when the true distribution is estimated.
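The kind of sufficient condition the abstract alludes to is the classic Cremer-McLean full-rank construction: if a bidder's conditional beliefs about the other bidder form a full-rank matrix, a side bet priced on the other bidder's report can charge each type exactly its expected surplus. The two-type numbers below are hypothetical.

```python
# Solve P z = surplus for the side-bet payments z: z[s] is what the bidder
# pays when the OTHER bidder reports type s. Full rank (det != 0) is the
# Cremer-McLean condition making the system solvable.
def side_bet(P, surplus):
    (a, b), (d, e) = P
    det = a * e - b * d
    return ((e * surplus[0] - b * surplus[1]) / det,
            (a * surplus[1] - d * surplus[0]) / det)

P = [[0.8, 0.2],                 # row t: beliefs about the other bidder's
     [0.3, 0.7]]                 # type, conditional on own type t
surplus = (5.0, 9.0)             # interim expected surplus of each type
z = side_bet(P, surplus)
for t in range(2):               # a truthful type-t bidder expects to pay
    print(P[t][0] * z[0] + P[t][1] * z[1])   # exactly surplus[t]
```

The talk's contribution is a characterization that is also necessary, plus robustness when P itself is only estimated, which is exactly where this construction is fragile.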
|04/08/2016||12-1pm||North 311||Brandon Thomas Fain||The Core of The Participatory Budgeting Problem||info|
The Core of The Participatory Budgeting Problem
In participatory budgeting, communities collectively decide on the allocation of public tax dollars for local public projects. In this work, we consider the question of aggregating the preferences of community members into an actionable project funding plan. We model this problem as a Fair Knapsack Problem, where the central body fairly allocates public goods according to preferences reported by the community members, subject to a budget constraint. We argue that the classic game-theoretic notion of the core captures fairness in this setting and in others that can be modeled as fair knapsack problems, including fair cache sharing as a specific example. In this talk, I will show how we can compute the core by solving for a public goods market equilibrium called the Lindahl equilibrium. Through a transformation of the equilibrium conditions, I will show that when the utility functions of the users are separable across items and satisfy other mild conditions, the core corresponds to a concave potential game on the items. This implies efficient computation and generalizes the well-known concept of proportional fairness. I will briefly discuss how we can use our characterization to implement an approximately core, approximately truthful mechanism based on the exponential mechanism from differential privacy for the case of linear utility functions. Finally, I will discuss experimental results on real participatory budgeting voting data. In particular, empirical results suggest the core can be efficiently computed by best-response dynamics even when the underlying potential game on the items is not concave. I will also use our experiments to discuss the relation of the core to other preference aggregation schemes.
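For linear utilities, the proportional-fairness benchmark the core generalizes maximizes the Nash social welfare, the sum of log utilities, over feasible spending. The tiny grid search below (hypothetical preference data, two divisible projects) illustrates the objective; it is not the talk's Lindahl-equilibrium computation.

```python
from math import log

budget = 100.0
weights = [(1.0, 0.0),   # agent 1 only values project A
           (0.0, 1.0),   # agent 2 only values project B
           (1.0, 1.0)]   # agent 3 values both equally

def nash_welfare(xa):
    """Sum of log utilities when xa dollars go to A, the rest to B."""
    xb = budget - xa
    return sum(log(wa * xa + wb * xb) for wa, wb in weights)

best = max((xa / 10 for xa in range(1, 1000)), key=nash_welfare)
print(best)   # -> 50.0: the symmetric instance splits the budget evenly
```

Agent 3's utility is constant here, so the objective reduces to balancing agents 1 and 2; the core requires that no coalition could do proportionally better with its share of the budget, which this split satisfies.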
|04/01/2016||12-1pm||Gross Hall 318||Siddhartha Banerjee||Dynamic Pricing in Rideshare Platforms||info|
Dynamic Pricing in Rideshare Platforms
Ride-sharing platforms such as Lyft and Uber are among the fastest growing online marketplaces. A key feature of these platforms is the implementation of fine-grained, fast-timescale dynamic pricing -- where prices can react to the instantaneous system state, and across very small geographic areas. In this talk, we will explore the value of such fast-timescale dynamic pricing, using a queueing model for ride-share platforms, which combines the stochastic dynamics of the platform's operations with strategic models of both passenger and driver behavior. Using this model, we will show that dynamic pricing is not better than the optimal static price in most settings. However, finding the optimal static price requires exact knowledge of system parameters; we will also show that dynamic pricing is much more robust to fluctuations in these parameters as compared to static prices. Thus, our work suggests that dynamic pricing does not necessarily buy more than static pricing, but rather, it lets rideshare platforms realize the benefits of optimal static pricing with imperfect knowledge of system parameters.
|03/25/2016||12-1pm||Gross Hall 318||Ruta Mehta||A Market Model for Scheduling, with Applications to Cloud Computing||info|
A Market Model for Scheduling, with Applications to Cloud Computing
We present a market for allocating and scheduling resources to agents who have specified budgets and need to complete specific tasks. Our market model is motivated by applications in cloud computing. Two important aspects required in this market are: (1) agents need specific amounts of each resource to complete their tasks, and (2) agents would like to complete their tasks as soon as possible. In incorporating these aspects, we arrive at a model that deviates substantially from market models studied so far in economics and theoretical computer science. Indeed, all known techniques developed to compute equilibria in markets in the last decade and a half seem not to apply here. We give a polynomial time algorithm for computing an equilibrium using a new technique that is somewhat reminiscent of the ironing procedure used in the characterization of optimal auctions by Myerson. This is despite the fact that the set of equilibrium prices could be non-convex; in fact, it could have “holes”. Additionally, we show that the market satisfies the first welfare theorem. We also show that our market mechanism is incentive compatible when the agents only care about their flow time. A small modification to the payments, keeping the allocation the same, makes the entire mechanism incentive compatible for the setting in which agents want to first minimize flow time and, subject to that, minimize their payments. This is complemented by showing the impossibility of an incentive compatible mechanism in the quasi-linear model.
|03/01/2016||11:45am||Soc Sci Building 113||Mike Wellman||Empirical Game-Theoretic Methods for Analysis of Market Microstructure||info|
Empirical Game-Theoretic Methods for Analysis of Market Microstructure
Financial markets are complex strategic environments, populated by human and algorithmic actors with heterogeneous and dynamically changing objectives and information, interacting asynchronously through distributed market mechanisms. Details of market microstructure may be pivotal, yet high-fidelity models resist definitive game-theoretic solution. For the past several years my research group has been developing an empirical approach, where data from agent-based simulation produces game models amenable to principled strategic reasoning. We have applied this methodology to a variety of market scenarios in finance and e-commerce, as well as other strategic domains. In this talk I illustrate the approach for welfare analysis in continuous limit order markets, focusing on the effects of strategic shading and market making on allocative efficiency.
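The empirical game-theoretic loop can be sketched on a toy instance: run a (here, mocked and hypothetical) simulator many times per strategy profile, average the noisy payoffs into an estimated game, and analyze equilibria of that estimate rather than of an analytic model.

```python
import random

# Hidden "true" 2x2 game (a prisoner's dilemma) that only the noisy
# simulator can observe; the analyst works with averaged estimates.
TRUE = {(0, 0): (3, 3), (0, 1): (0, 4), (1, 0): (4, 0), (1, 1): (1, 1)}

def simulate(profile, rng):
    u1, u2 = TRUE[profile]
    return u1 + rng.gauss(0, 0.1), u2 + rng.gauss(0, 0.1)

rng = random.Random(0)
emp = {p: [0.0, 0.0] for p in TRUE}
for p in emp:                       # average 200 simulator samples per profile
    for _ in range(200):
        s = simulate(p, rng)
        emp[p][0] += s[0] / 200
        emp[p][1] += s[1] / 200

def pure_nash(emp):
    """Pure equilibria of the estimated game via best-response checks."""
    return [(a, b) for (a, b) in emp
            if emp[(a, b)][0] >= emp[(1 - a, b)][0]
            and emp[(a, b)][1] >= emp[(a, 1 - b)][1]]

print(pure_nash(emp))   # recovers the defect-defect equilibrium (1, 1)
```

Real applications replace `simulate` with an agent-based market simulator and add strategy exploration; the estimation-then-analysis structure is the same.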
You can find more past talks here.