Exploring the Intersection of Computer Science and Economics

Date | Time | Place | Speaker | Topic
---|---|---|---|---

11/13/2015 | 12-1pm | Gross Hall 318 | Sebastien Lahaie | Quadratic-Price Combinatorial Auctions

I present an iterative combinatorial auction that offers modularity in the choice of price structure, drawing on ideas from convex optimization and machine learning. The auction can incorporate the standard item- or bundle-pricing schemes, but also any degree of polynomial prices in between, maintaining a sparse representation of the prices throughout. The design is motivated by a theoretical analysis which indicates that low-dimensional (e.g., item) prices lead to dramatically faster convergence than high-dimensional (e.g., bundle) prices, a phenomenon already observed in practice. An empirical evaluation reveals that quadratic-price auctions are sufficient to clear the market across standard benchmarks, and remain competitive against state-of-the-art ascending-price auctions in terms of efficiency and revenue. During the talk, I will highlight the similarities and differences between the problem of market clearing and classical learning problems like classification and regression. Joint work with Jake Abernethy, Matus Telgarsky, and Ben Lubin.

10/23/2015 | 12-1pm | Gross Hall 318 | Vibhanshu Abhishek | Examining the Impact of Contextual Ambiguity on Search Advertising Keyword Performance: A Topic Model Approach

In this paper, we explore how the contextual ambiguity of a search can affect a keyword’s performance. The context of consumer search is often unobserved and predicting it can be nontrivial. Consumer search contexts may vary even when consumers are searching for the same keyword. Due to the ambiguity of a keyword, a large portion of the ads displayed may fall outside a particular consumer’s interest, potentially leading to low click-through rates on search engines. In our study, we propose an automatic way of examining keyword contextual ambiguity based on probabilistic topic models from machine learning and computational linguistics. We quantify the effect of contextual ambiguity on keyword click-through performance using a hierarchical Bayesian model that allows for topic-specific effects and nonlinear position effects. We validate our study using a novel dataset from a major search engine that contains information on consumer click activities for 12,790 distinct keywords across multiple product categories from over 4.6 million impressions. We find that consumer click behaviors vary significantly across keywords, and that keyword category and the contextual ambiguity of the keywords significantly affect such variation. Specifically, higher contextual ambiguity can lead to a higher click-through rate (CTR) on top-positioned ads, but the CTR tends to decay faster with position. Therefore, the overall effect of contextual ambiguity on CTR varies across positions. Our study has the potential to help advertisers design keyword portfolios and bidding strategies by extracting contextual ambiguity and other semantic characteristics of keywords based on large-scale analytics of unstructured data. It can also help search engines improve the quality of displayed ads in response to a consumer search query.
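One simple way to operationalize contextual ambiguity is the entropy of a keyword's inferred topic distribution; the measure and the toy distributions below are illustrative assumptions of mine, not necessarily the paper's construction:

```python
import math

def topic_entropy(topic_probs):
    """Shannon entropy (in bits) of a keyword's topic distribution:
    higher entropy = consistent with more search contexts = more
    contextually ambiguous."""
    return -sum(p * math.log2(p) for p in topic_probs if p > 0)

# Hypothetical distributions a topic model (e.g., LDA) might infer:
focused = [0.92, 0.05, 0.02, 0.01]    # e.g., "hotel booking paris"
ambiguous = [0.30, 0.28, 0.22, 0.20]  # e.g., "jaguar"

print(topic_entropy(focused) < topic_entropy(ambiguous))  # True
```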

09/09/2015 | 12-1pm | Gross Hall 318 | Bo Waggoner | Low-Cost Learning via Active Data Procurement

I'll talk about a problem of growing interest at the interface of theoretical machine learning and economics: purchasing data from strategic agents for a machine learning task such as classification or regression. The goal is to design a mechanism that purchases the data, subject to a budget constraint and incentive compatibility, and to prove machine learning guarantees (e.g., "regret" or "risk" bounds). I'll briefly overview the literature in this area and present our model and results. We consider a tweak to the classic model of online "no regret" learning in which data points, arriving one by one, are each held by a strategic agent who must be compensated for providing it. A mechanism must post prices in order to purchase the data. Our mechanism interfaces with a broad class of learning algorithms and achieves low regret as a function of the budget. These results also imply learning guarantees for the "statistical" setting in which data points are drawn from a distribution (rather than being adversarial/worst case). We'll cover some of the interesting technical ideas, but the talk will be accessible to all backgrounds (I promise!). The paper is available at http://arxiv.org/abs/1502.05774 and is joint work with Jake Abernethy, Yiling Chen, and CJ Ho.
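For background, the classic no-regret setting the mechanism plugs into can be sketched with the multiplicative-weights (Hedge) algorithm; this is a generic textbook learner on synthetic losses, not the paper's data-purchasing mechanism:

```python
import math, random

def hedge(losses, eta=0.1):
    """Multiplicative weights over n experts; losses[t][i] in [0, 1].
    Returns (algorithm's cumulative loss, best expert's cumulative loss)."""
    n = len(losses[0])
    w = [1.0] * n
    alg_loss = 0.0
    for round_losses in losses:
        total = sum(w)
        # Play the weight-proportional mixture, then downweight bad experts.
        alg_loss += sum(wi / total * li for wi, li in zip(w, round_losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    best = min(sum(l[i] for l in losses) for i in range(n))
    return alg_loss, best

rng = random.Random(1)
losses = [[rng.random() for _ in range(5)] for _ in range(500)]
alg, best = hedge(losses)
print(alg - best)  # regret grows like sqrt(T), far below T = 500
```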

07/22/2015 | 11am-12 | Gross Hall 304-B | Sam Ganzfried | Computing Strong Game-Theoretic Strategies in Large Games

Important problems in nearly all disciplines and application domains involve multiple agents behaving strategically. Examples include deploying officers to protect ports, determining optimal thresholds to protect against phishing attacks, and finding robust policies for diabetes management. Such problems are modeled under the framework of game theory. In many important games there is information that is private to only some agents and not available to other agents -- for instance, in auctions each bidder may know his own valuation and only know the distribution from which other agents' valuations are drawn. My research designs, analyzes, and implements new approaches for agents in large imperfect-information games. There are several major challenges that must be confronted. First, standard solution concepts such as Nash equilibrium lack theoretical justification in certain classes (e.g., games with more than two players). Second, computing these concepts is difficult in certain classes from a complexity-theoretic perspective. Third, computing these concepts is difficult in practice for many interesting games even in cases where they are well motivated and polynomial-time algorithms exist (e.g., two-player zero-sum (competitive) games), due to enormous state spaces. And fourth, for all game classes, it is not clear that the goal should be to approximate a Nash equilibrium; one could achieve significantly higher payoff by learning to exploit opponents’ mistakes. However, such exploitation must be done in a way that does not open oneself up to being exploited in turn by strong deceptive opponents. While the approaches I have developed are domain independent, much of my research has been motivated by and applied to the domain of poker. Poker has emerged as a major AI challenge problem.
Poker is not simply a toy game; it is tremendously popular with humans, and online poker is a multi-billion dollar industry. For the past ten years, a competition between the strongest computer poker agents has been held annually at the top AI conference. The version of two-player no-limit Texas hold 'em played in the competition has approximately 10^165 states in its game tree. As part of my research, I created an agent for two-player no-limit Texas hold 'em that is currently the strongest agent in the world: it beat each opposing agent with statistical significance in the 2014 AAAI computer poker competition. An improved version called Claudico also competed against the strongest human specialists in 2015.

07/22/2015 | 12-1pm | Gross Hall 304-B | Lance Fortnow | Bounding Rationality by Computational Complexity

Traditional microeconomic theory treats individuals and institutions as completely understanding the consequences of their decisions given the information they have available. These assumptions may not be valid, as we might have to solve hard computational problems to optimize our choices. What happens if we restrict the computational power of economic agents? There has been some work in economics treating computation as a fixed cost or simply considering the size of a program. This talk will explore a new direction, bringing the rich tools of computational complexity into economic models -- a tricky prospect where even basic concepts like "input size" are not well defined. We show how to incorporate computational complexity into a number of economic models including game theory, prediction markets, forecast testing, preference revelation, and awareness. This talk will not assume any background in either economics or computational complexity.

07/20/2015 | 12-1pm | Gross Hall 304-B | Darrell Hoy | Price of Anarchy Bounds for Theoretical and Empirical Auctions

Analyzing auctions can be challenging in asymmetric settings: for instance, Vickrey [1961] left the analytical characterization of equilibrium in the two-bidder first-price auction with uniform distributions as an open question; equilibrium remained uncharacterized until 2012. I will discuss a framework for analyzing welfare and revenue in auctions without needing to solve for equilibrium, via Price of Anarchy analysis. The talk will begin with a theoretical framework that reduces the problem of generating welfare and revenue bounds to analyzing one property of the rules of an auction: revenue covering. This approach gives a simple proof of welfare guarantees, and enables the first revenue approximation results for the first-price and all-pay auctions in asymmetric settings. We will then leverage this theoretical framework for empirical pursuits with one simple observation: we can estimate the revenue covering of an auction from data. We apply this to advertising auctions on BingAds to get robust empirical welfare guarantees.

07/17/2015 | 12-1pm | Gross Hall 304-B | Nisarg Shah | Leximin Allocations in the Real World

As part of a collaboration with a major California school district, we study the problem of fairly allocating unused classrooms in public schools to charter schools. Our approach revolves around the randomized leximin mechanism. Previous work showed that in a much simpler domain, the leximin mechanism is proportional, envy-free, efficient, and group strategyproof. We extend this result to a broad domain that not only subsumes our classroom allocation setting, but also a number of other settings studied in the literature. Consequently, our result generalizes various results from resource allocation, cake-cutting, and kidney exchange. We provide worst-case guarantees regarding the performance of the leximin mechanism with respect to two natural quantitative objectives. Our experiments, which are based on real data, show that the performance in practice is significantly better than the worst-case guarantees. Our nontrivial implementation of the leximin mechanism scales gracefully in terms of running time (even though the problem is intractable in theory). At the end of the talk, I will discuss issues related to the deployment of our approach.
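The leximin order itself is easy to state in code; this comparator is my own minimal sketch of the objective being optimized, separate from the mechanism's actual implementation:

```python
def leximin_preferred(u, v):
    """True if utility vector u is leximin-preferred to v: sort each
    vector from worst-off to best-off and compare lexicographically."""
    su, sv = sorted(u), sorted(v)
    for a, b in zip(su, sv):
        if a != b:
            return a > b
    return False  # leximin-equivalent

# Maximize the worst utility first, then the second worst, and so on:
print(leximin_preferred([3, 5, 5], [3, 4, 9]))  # True: worst ties at 3, then 5 > 4
```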

05/15/2015 | 12-1pm | Gross Hall 304-B | Shaddin Dughmi | Algorithmic Bayesian Persuasion

We consider the Bayesian Persuasion problem, as formalized by Kamenica and Gentzkow [27], from an algorithmic perspective in the presence of high-dimensional and combinatorial uncertainty. Specifically, one player (the receiver) must take one of a number of actions with a-priori unknown payoff; another player (the sender) is privy to additional information regarding the payoffs of the various actions for both players. The sender can commit to revealing a noisy signal regarding the realization of the payoffs of various actions, and would like to do so in a way that maximizes her own payoff in expectation, assuming that the receiver rationally acts to maximize his own payoff. This models a number of natural strategic interactions, in domains as diverse as e-commerce, advertising, politics, law, security, finance, and others. When the payoffs of various actions follow a joint distribution (the common prior), the sender’s problem is nontrivial, and its complexity depends on the representation of the prior. Assuming a Bayesian receiver, we study the sender’s problem through an algorithmic and approximation lens. We show two results for the case in which the payoffs of different actions are i.i.d. and given explicitly: a polynomial-time (exact) algorithmic solution, and a “simple” (1 − 1/e) approximation algorithm. Both results hinge on a “symmetry characterization” of the optimal signaling scheme. The former result also involves a connection to auction theory, and in particular to Border’s characterization of the space of feasible reduced-form auctions. For the latter result, our algorithm decouples the signaling problem for the different actions and signals independently for each. This decoupling drives a larger, conceptual point: collaborative persuasion by multiple parties (the senders) is a parallelizable task, requiring no coordination when actions are identical and independent and only an approximate solution is sought.
We leave open the intriguing question of whether either of these two results extends to independent yet not necessarily identical payoff distributions. We then turn our attention to the general problem, and consider distributions over action payoffs given by a sampling oracle. Somewhat surprisingly, we show that even in this general setting the persuasion problem admits a fully polynomial-time approximation scheme (FPTAS) in a bi-criteria sense. As our main technical tool to prove this theorem, we study the algorithmics of a natural and abstract question on vector spaces, which we term dispersion testing: given two distributions A and B supported on a finite-dimensional vector space, decide whether A is a mean-preserving spread of B, and if so compute the inverse of the associated spreading map. We employ tools from convex optimization and optimal transport theory to design an approximate test for the dispersion testing problem, and relate the persuasion problem to dispersion testing via a randomized Turing reduction employing the ellipsoid method.
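In the canonical two-state, two-action example of Kamenica and Gentzkow, the sender-optimal signal has a closed form; the sketch below assumes a receiver who acts iff their posterior on the good state reaches a threshold, which is my simplification of the general model:

```python
def optimal_binary_persuasion(prior, threshold):
    """Probability the receiver acts under the sender-optimal signal.
    The signal recommends acting always in the good state and with
    probability q in the bad state, where q keeps the posterior on
    hearing the recommendation exactly at the receiver's threshold."""
    if prior >= threshold:
        return 1.0  # the receiver acts even with no information
    q = prior * (1 - threshold) / (threshold * (1 - prior))
    return prior + (1 - prior) * q

# Prior 0.3, receiver acts iff posterior >= 0.5: persuasion lifts the
# probability of action from 0.3 (full disclosure) to 0.6.
print(optimal_binary_persuasion(prior=0.3, threshold=0.5))  # ~0.6
```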

04/24/2015 | 12-1pm | Gross Hall 304-B | Mallesh Pai | Do Online Social Networks Increase Welfare?

We consider a strategic online social network that controls information flows between agents in a social learning setting. Agents on the network select among products of competing firms of unknown quality. The network sells advertising to firms. We consider display advertising, which is standard firm-to-consumer advertising, and social advertising, in which agents who purchased a firm's product are highlighted to their friends. We show that in equilibrium, information is unbiased relative to a setting with no advertising. However, the network reduces the information agents see about others' purchases, since this increases advertising revenue. Hence consumer welfare is lower than in the first best.

04/17/2015 | 12-1pm | Gross Hall 304-B | Jimmy Yang | Optimization in Online Advertising

As a fast-growing business, online advertising has become the largest media channel since 2013. With the benefits of a flexible and dynamic format, the ability to track and measure user response, global coverage, and real-time delivery, it creates increasingly higher value for advertisers and publishers. Online advertising is an ecosystem composed of users, publishers, and advertisers, who closely interact with each other. Each party has its own goals in the ecosystem, and optimization plays a critical role in reaching those goals. In this talk, we will first briefly review the characteristics and trends of online advertising. Then we will present some of the important optimization problems, from the perspectives of publishers and advertisers respectively. In general, publishers are interested in maximizing their revenue given an available inventory of supply, while advertisers are interested in maximizing their ROI (Return On Investment) given an available budget. These problems are modeled in different ways and solved to provide both online and offline solutions that guide business practice. The talk concludes with a summary and a discussion of potential research opportunities.

04/10/2015 | 12-1pm | Gross Hall 304-B | Nikhil Devanur | Simple Auctions with Simple Strategies

We introduce single-bid auctions as a new format for combinatorial auctions. In single-bid auctions, each bidder submits a single real-valued bid for the right to buy items at a fixed price. Contrary to other simple auction formats, such as simultaneous or sequential single-item auctions, bidders can implement no-regret learning strategies for single-bid auctions in polynomial time. Price of anarchy bounds for correlated equilibrium concepts in single-bid auctions therefore have more bite than their counterparts for auctions and equilibria for which learning is not known to be computationally tractable (or worse, known to be computationally intractable). To this end, we show that for any subadditive valuations the social welfare at equilibrium is an O(log m)-approximation to the optimal social welfare, where m is the number of items. We also provide tighter approximation results for several subclasses. Our welfare guarantees hold for Nash equilibria and no-regret learning outcomes in both Bayesian and complete information settings.
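A minimal simulation of a single-bid auction outcome for additive bidders; this is my simplified reading of the format (bidders processed in decreasing order of their one bid, each buying any remaining item at a per-item price equal to that bid), not necessarily the paper's exact rules:

```python
def single_bid_auction(bids, values):
    """bids[bidder] is the single bid; values[bidder][j] is the additive
    value for item j. Returns (allocation, revenue)."""
    n_items = len(next(iter(values.values())))
    remaining = set(range(n_items))
    allocation, revenue = {}, 0.0
    for bidder in sorted(bids, key=bids.get, reverse=True):
        # An additive bidder takes every remaining item worth more than
        # the per-item price they face (their own bid).
        take = {j for j in remaining if values[bidder][j] > bids[bidder]}
        allocation[bidder] = take
        revenue += bids[bidder] * len(take)
        remaining -= take
    return allocation, revenue

bids = {"alice": 4.0, "bob": 2.0}
values = {"alice": [9.0, 3.0], "bob": [5.0, 6.0]}
alloc, rev = single_bid_auction(bids, values)
print(alloc, rev)  # alice buys item 0 at price 4; bob buys item 1 at price 2
```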

04/08/2015 | 3:30-4:30pm | Gross Hall 330 | Luis Ortiz (Stony Brook University) | On Networks and Behavior: Strategic Inference and Machine Learning

Studying complex behavior in economic, social, or other similar systems is an important scientific endeavor with potentially direct impact on society via the eventual commercialization of relevant technology. The big-data revolution offers the opportunity to easily collect and process large amounts of data recording system behavior. Yet, our fundamental understanding of real-world complex systems remains slim at best. In this talk, I will summarize research from my group that takes a modern AI, machine learning, and engineering approach to questions about systems in domains where global behavior results from complex local interactions of agents embedded in a network. Our particular interest is in interactions resulting from the distributed reasoning and deliberate decisions of a large number of agents (e.g., a social network). In our work, we seek, and provide, algorithms that scale polynomially with the number of agents and can thus deal with relatively large systems. I will also illustrate our approach to causal strategic inference and to machine learning from strictly behavioral data in a real-world domain: the U.S. Congress.

04/03/2015 | 12-1pm | Gross Hall 304-B | Jens Witkowski | Robust Peer Prediction Mechanisms

Peer prediction mechanisms are payment rules that seek to elicit private information from rational agents without the requirement that ground truth is eventually revealed. For example, a public health program may require participants to report whether they have ever used illegal drugs, and participants may lie about their drug use because of shame or concerns about not being eligible for the program. Whether or not participants have in fact used illegal drugs is private information that is never revealed, and the participants' reports thus cannot be verified. The classical peer prediction method [Miller et al. 2005] provides an approach to these settings. It does, however, critically rely on the assumption that all participants share the same prior beliefs, and that the mechanism knows these beliefs. We relax these common knowledge assumptions. I will first present our Robust Bayesian Truth Serum (RBTS), which asks participants for two types of reports: a report of the private information the mechanism is interested in, and a prediction report corresponding to a participant's belief about the private information of other participants. RBTS dispenses with the assumption that the participants' prior beliefs are known to the mechanism. RBTS does, however, still rely on the participants sharing the same prior beliefs. Our second contribution is the design of subjective-prior peer prediction mechanisms, which further reduce the assumption of common knowledge. As in RBTS, they do not require knowledge of the participants' prior beliefs. Moreover, they allow the participants' prior beliefs to be subjective, i.e., different from one another.

03/20/2015 | 12-1pm | Gross Hall 304-B | R. Ravi | Expertise in Online Markets

We examine the effect of the presence of knowledgeable buyers (experts) in online markets where auctions with a hard close and posted prices are widely used. We model buyer expertise as the ability to accurately predict the quality, or condition, of an item. In auctions with a hard close, sniping--submitting bids in the last minute--emerges as an equilibrium strategy for experts. We show that non-experts bid more aggressively as the proportion of experts increases. As a consequence, we establish that the auction platform may obtain a higher revenue by (i) enforcing a hard close and allowing sniping, and (ii) withholding information regarding the quality of the item. Moreover, in online markets where both auctions and posted prices are available, we show that the presence of experts allows the sellers of high quality items to signal their quality by choosing to sell via auctions. Joint ongoing work with Stelios Despotakis (CMU), Isa Hafalir (CMU) and Amin Sayedi (U Washington).

03/06/2015 | 12-1pm | Gross Hall 304-B | Ashish Goel | Crowdsourced Democracy: Algorithms, Mechanisms, and Platforms

YouTube competes with Hollywood as an entertainment channel, and also supplements Hollywood by acting as a distribution mechanism. Twitter has a similar relationship to news media, and Coursera to universities. But Washington has no such counterpart; there are no online alternatives for making democratic decisions at large scale as a society. As opposed to building consensus and compromise, public discussion boards often devolve into flame wars when dealing with contentious socio-political issues. In this talk, we will describe some of the challenges in and approaches for solving this broad problem. We will first describe an opinion formation process that leads to polarization. We will then describe our work on participatory budgeting and knapsack voting. All budget problems are knapsack problems at their heart, since the goal is to pack as much "societal value" as possible into a "spending capacity". We will describe our experience with implementing knapsack voting in Chicago's 49th ward and Vallejo, and show protocols that are weakly strategy-proof and amenable to efficient aggregation. We will also show that comparison voting can elicit an approximate Condorcet winner efficiently when one exists, and describe our experience with comparison-based voting in an experiment in Finland. We will finally describe a mechanism called triadic consensus: here, we divide individuals into small groups (say, groups of three) and ask them to come to consensus; the results of the triadic deliberations in each round form the input to the next round. We show that this method is efficient and strategy-proof in fairly general settings, whereas no pair-wise deliberation process can have the same properties. This is joint work with Tanja Aitamurto, Pranav Dandekar, Anilesh Krishnaswamy, Helene Landemore, David Lee, and Sukolsak Sakshuwong.
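The knapsack flavor of participatory budgeting can be illustrated with a toy aggregator; the greedy approvals-per-dollar heuristic and the numbers below are my own invention, not the strategy-proof protocol from the talk:

```python
def knapsack_vote(approvals, costs, budget):
    """Rank projects by approval votes per dollar and fund them
    greedily while the budget allows."""
    score = {p: approvals[p] / costs[p] for p in costs}
    funded, spent = [], 0
    for p in sorted(score, key=score.get, reverse=True):
        if spent + costs[p] <= budget:
            funded.append(p)
            spent += costs[p]
    return funded, spent

# Hypothetical ward-level projects, approval counts, and costs:
approvals = {"park": 120, "library": 90, "road": 60}
costs = {"park": 50_000, "library": 30_000, "road": 40_000}
funded, spent = knapsack_vote(approvals, costs, budget=80_000)
print(funded, spent)  # ['library', 'park'] 80000
```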

02/27/2015 | 12-1pm | Gross Hall 304-B | David Pennock | Designing Wagering Mechanisms

In one online contest, the average prediction of 2231 people beat all but 0.3% of individual players. This is a striking, yet typical, example of the wisdom of crowds. I will discuss the design of wagering mechanisms to elicit probabilities from as many experts as possible at as little cost as possible, to maximize accuracy. Dozens of mechanisms have been proposed, from the parimutuel betting system invented in 1867 to our latest No-Arbitrage Wagering Mechanism (2014). Taking an axiomatic approach, I discuss how NAWM is able to maintain many of the most desirable properties for wagering mechanisms, including incentive compatibility, individual rationality, sybilproofness, and weak budget balance, while in many settings actually earning a profit for the market institution. If I have time, I will talk about the Microsoft Prediction Lab, our effort to crowdsource literally billions of combinatorial predictions.
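One classic design from this literature is weighted-score wagering (Lambert et al.), in which each player's payout moves with how their proper score compares to the wager-weighted average score. The sketch below uses the Brier score and invented numbers, and is background to, not the same as, NAWM:

```python
def wagering_payouts(probs, wagers, outcome):
    """Weighted-score wagering for a binary event (outcome 0 or 1).
    Scores lie in [0, 1], so payouts are nonnegative, and they sum
    exactly to the wagers: the mechanism is budget balanced."""
    def score(p):  # quadratic (Brier) score scaled to [0, 1]
        return 1.0 - (p - outcome) ** 2
    total = sum(wagers)
    avg = sum(m * score(p) for p, m in zip(probs, wagers)) / total
    return [m * (1 + score(p) - avg) for p, m in zip(probs, wagers)]

pay = wagering_payouts(probs=[0.9, 0.4], wagers=[10.0, 10.0], outcome=1)
print(pay, sum(pay))  # the better forecaster gains; payouts sum to 20
```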

02/18/2015 | 1:15-2:15pm | LSRC D344 | Renato Paes Leme | Gross Substitutes: Old and New Results

The concept of gross substitute valuations was rediscovered many times: it was introduced in economics by Kelso and Crawford as a sufficient condition for the existence of Walrasian equilibria in economies with indivisible goods. The same concept was also introduced independently in other communities under different names: M-concave functions in discrete convex analysis (Murota and Shioura), matroidal and well-layered maps in combinatorial optimization (Dress and Terhalle), and valuated matroids in matroid theory (Dress and Wenzel). We focus on algorithmic aspects of the various definitions and the connections that can be established by looking at the concept of gross substitute valuations from different angles. We also discuss a new result (joint with Michael Ostrovsky) showing that the class of gross substitutes is strictly larger than the class of endowed assignment valuations, closing an open problem posed by Hatfield and Milgrom in their matching-with-contracts paper.

02/13/2015 | 12-1pm | Gross Hall 304-B | Francesca Rossi | Restricted Manipulation in Iterative Voting: Convergence, Condorcet Efficiency, and Borda Score

When a group of agents needs to take a collective decision, e.g., to coordinate their actions, a voting rule can be used to aggregate their individual preferences into a collective choice. In such situations agents can act strategically and manipulate the election by misrepresenting their preferences to change the outcome in their favor. While manipulation is traditionally considered bad behavior to be avoided, in this paper we show that this form of strategic reasoning can yield beneficial results in the setting of iterative voting. We study sequences of elections where at each step an individual is allowed to manipulate by modifying his ballot according to a set of strategic moves. We prove convergence of this process for several combinations of voting rules and strategic moves, and we present experiments showing that the winner after iteration is often a better candidate than the initial one. For all cases under examination, we also analyze the computational and axiomatic properties of the iterative process considered as a one-stage voting rule.
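The iterative-voting dynamics can be sketched for plurality; this toy best-response loop with lexicographic tie-breaking is my own illustration (the paper's sets of strategic moves are more general):

```python
def winner(ballots):
    """Plurality winner with lexicographic tie-breaking."""
    counts = {}
    for b in ballots:
        counts[b] = counts.get(b, 0) + 1
    return min(counts, key=lambda c: (-counts[c], c))

def iterative_plurality(prefs, max_rounds=100):
    """From truthful ballots, let one voter at a time switch to a
    candidate they prefer to the current winner, if the switch makes
    that candidate win."""
    ballots = [p[0] for p in prefs]
    for _ in range(max_rounds):
        w, moved = winner(ballots), False
        for i, pref in enumerate(prefs):
            for c in pref:
                if c == w:
                    break  # only candidates this voter prefers to w matter
                trial = ballots[:i] + [c] + ballots[i + 1:]
                if winner(trial) == c:
                    ballots, moved = trial, True
                    break
            if moved:
                break
        if not moved:
            break
    return winner(ballots)

prefs = [["a", "b", "c"]] * 4 + [["b", "a", "c"]] * 4 + [["c", "b", "a"]] * 2
print(iterative_plurality(prefs))  # "b"
```

In this profile the truthful plurality winner is "a" (tie-break), but iteration converges to "b", which a 6-of-10 majority prefers to "a", matching the talk's point that iteration often reaches a better candidate.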

02/06/2015 | 12-1pm | Gross Hall 304-B | Thanh Nguyen | Near Feasible Stable Matchings with Complementarities

The National Resident Matching Program strives for a stable matching of medical students to teaching hospitals. In the presence of couples, stable matchings need not exist. For any student preferences, we show that each instance of a stable matching problem has a 'nearby' instance with a stable matching. The nearby instance is obtained by perturbing the capacities of the hospitals. Our approach is general and applies to other types of complementarities, as well as to matchings with side constraints and contracts.
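For reference, the couples-free baseline is the deferred acceptance algorithm, which always finds a stable matching; the textbook sketch below (with toy data) is standard background, not the paper's capacity-perturbation method:

```python
def deferred_acceptance(doc_prefs, hosp_prefs, capacity):
    """Doctor-proposing deferred acceptance for hospital-resident
    matching without couples. Returns hospital -> set of doctors."""
    free = list(doc_prefs)
    nxt = {d: 0 for d in doc_prefs}      # next hospital d will propose to
    held = {h: [] for h in hosp_prefs}   # tentatively accepted doctors
    rank = {h: {d: i for i, d in enumerate(ps)} for h, ps in hosp_prefs.items()}
    while free:
        d = free.pop()
        if nxt[d] >= len(doc_prefs[d]):
            continue                      # d has exhausted their list
        h = doc_prefs[d][nxt[d]]
        nxt[d] += 1
        held[h].append(d)
        held[h].sort(key=lambda x: rank[h][x])
        if len(held[h]) > capacity[h]:
            free.append(held[h].pop())    # reject the worst-held doctor
    return {h: set(ds) for h, ds in held.items()}

doc_prefs = {"d1": ["H1", "H2"], "d2": ["H1", "H2"], "d3": ["H1"]}
hosp_prefs = {"H1": ["d3", "d1", "d2"], "H2": ["d2", "d1"]}
match = deferred_acceptance(doc_prefs, hosp_prefs, {"H1": 1, "H2": 1})
print(match)  # H1 keeps its favorite d3; H2 keeps its favorite d2
```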

Date | Time | Place | Speaker | Topic
---|---|---|---|---

12/12/2014 | 12-1pm | Gross Hall 304-B | Bob Day | Combinatorial Auctions, Duality, and the Core

Combinatorial auctions allow bidders to bid on packages or combinations of items, providing a more expressive notion of bidding and protecting bidders from certain exposure problems. The winner-determination problem faced by the bid-taker is typically NP-hard, and with several independent strategic agents, the design of combinatorial auctions has provided a rich environment for technical research in recent years. Combinatorial auctions and their closely related generalization, combinatorial exchanges (which have several buyers and several sellers), have design limitations based on a few well-known impossibility theorems. This means that most practical applications of these concepts involve a fundamental compromise, and I will discuss in detail one such practical compromise, the core-selecting combinatorial auction, which has been used in several recent spectrum auctions in Europe, Australia, and Canada. Future directions in this area involve the generation of meaningful dual prices for the winner-determination integer program, and I will discuss some of the relationships among auctions, duality, and the core.

11/21/2014 | 12-1pm | Gross Hall 304-B | Yaron Singer | Influence Maximization through Adaptive Seeding

Throughout the past decade there has been extensive research on the problem of influence maximization in social networks: how does one target a small subset of individuals who will trigger a large word-of-mouth cascade in the network? In this talk we will introduce a new paradigm for influence maximization, called adaptive seeding. The framework is designed to dramatically improve influence maximization by leveraging a remarkable structural phenomenon in social networks known as the "friendship paradox" (or "your friends have more friends than you"), and presents a novel optimization model. The optimization model may be of independent interest to those curious about stochastic optimization, submodular maximization, and machine learning. We will discuss key algorithmic ideas and some experimental results of adaptive seeding.
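The friendship paradox that adaptive seeding leverages is easy to verify numerically: sampling a random endpoint of a random edge (a "friend") is degree-biased. A minimal check on a toy star network:

```python
def friendship_paradox(edges):
    """Compare the average degree of a node with the average degree
    of a 'friend' (a uniformly random endpoint of a random edge)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    avg_degree = sum(deg.values()) / len(deg)
    avg_friend_degree = sum(deg[u] + deg[v] for u, v in edges) / (2 * len(edges))
    return avg_degree, avg_friend_degree

# A star: one hub with 5 spokes -- friends have more friends on average.
edges = [(0, i) for i in range(1, 6)]
d, fd = friendship_paradox(edges)
print(d, fd)  # average degree ~1.67, average friend degree 3.0
```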

11/20/2014 | 12-1pm | LSRC D344 | Shuchi Chawla | Revenue maximization in interdependent value settings

A fundamental assumption underlying much of mechanism design is that buyers know their values for the products they are interested in. We consider settings where agents receive signals related to their values from a joint distribution, and their estimated values are functions of their own as well as others' signals. We consider revenue maximization in such settings and show that a variant of the VCG mechanism with admission control gives a constant approximation to the optimal expected revenue. Our results do not require any assumptions on the signal distributions; however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.

11/14/2014 | 11:50am-4:00pm | Gross Hall 330 | | CS-Econ Retreat

11:50-12:00 Get lunch boxes, chat, get seated. Posters should be set up by the end of this.
12:00-12:10 Introduction (Vincent Conitzer): state of cs-econ@duke, goals of retreat
12:10-12:25 Research overview: Vincent Conitzer
12:25-12:40 Research overview: Rachel Kranton
12:40-12:55 Research overview: Ozan Candogan
12:55-01:10 Research overview: Debmalya Panigrahi
01:10-01:30 Coffee break; posters
01:30-01:45 Research overview: Kamesh Munagala
01:45-02:00 Research overview: Giuseppe (Pino) Lopomo
02:00-02:15 Research overview: Sasa Pekec
02:15-02:30 Research overview: Santiago Balseiro
02:30-02:45 Research overview: Ben Lee
02:45-03:05 Coffee break; posters
03:05-04:00 Group discussion: joint research, joint grant proposals, seminar series, workshops, MSEC program, future of the field, open problems, …

11/7/2014 | 12-1pm | Gross Hall 304-B | Matt Weinberg | Simple mechanisms for complex markets | info |

Simple mechanisms for complex markets We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [HR12] and menus of infinite size [DDT13]. Hart and Nisan [HN12] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [HN12] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a 6-approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items. | |||||
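To make the two pricing schemes concrete, here is a small Monte-Carlo comparison of item pricing (each item at its monopoly reserve) and grand-bundle pricing for an additive buyer with independent values. The uniform distributions and the bundle price below are hypothetical choices for illustration, not the paper's setting or its analysis.

```python
import random

def monopoly_price_uniform(lo, hi):
    """Revenue-maximising posted price for a single U[lo, hi] value:
    maximise p * P(v >= p); the unconstrained optimum hi/2 is clipped to lo."""
    return max(lo, hi / 2.0)

def simulate(dists, bundle_price, trials=20000, seed=0):
    """Monte-Carlo revenue of item pricing vs. pricing the grand bundle,
    for an additive buyer with independent uniform values."""
    rng = random.Random(seed)
    prices = [monopoly_price_uniform(lo, hi) for lo, hi in dists]
    item_rev = bundle_rev = 0.0
    for _ in range(trials):
        vals = [rng.uniform(lo, hi) for lo, hi in dists]
        # buyer takes exactly the items priced below her value
        item_rev += sum(p for v, p in zip(vals, prices) if v >= p)
        if sum(vals) >= bundle_price:
            bundle_rev += bundle_price
    return item_rev / trials, bundle_rev / trials

# Ten i.i.d. U[0, 1] items: the grand bundle priced just below its
# expected total value beats separate item pricing.
item, bundle = simulate([(0.0, 1.0)] * 10, bundle_price=4.5)
```

With many i.i.d. items the total value concentrates around its mean, so bundle pricing extracts nearly the full surplus, while with very asymmetric items item pricing tends to win; the paper's result is that the better of the two is always a 6-approximation.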

10/30/2014 | 12-1pm | LSRC D344 | Markus Brill | Computing Dominance-Based Solution Concepts for Normal-Form Games | info |

Computing Dominance-Based Solution Concepts for Normal-Form Games Two common criticisms of Nash equilibrium are its dependence on very demanding epistemic assumptions and its computational intractability. We study the computational properties of less demanding set-valued solution concepts that are based on varying notions of dominance. These concepts are intuitively appealing, they always exist, and admit unique minimal solutions in important subclasses of games. Examples include Shapley's saddles, Harsanyi and Selten's primitive formations, Basu and Weibull's CURB sets, and Dutta and Laslier's minimal covering sets. We propose two generic algorithms for computing these concepts and investigate for which classes of games and which properties of the underlying dominance notion the algorithms are sound and efficient. | |||||
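As a minimal illustration of the kind of dominance computation involved (far simpler than the set-valued concepts in the talk), the following sketch performs iterated elimination of strictly dominated pure strategies in a bimatrix game; the payoff matrices are a made-up prisoner's-dilemma example, not taken from the talk.

```python
import itertools

def iterated_strict_dominance(A, B):
    """Iteratively eliminate pure strategies strictly dominated by another
    pure strategy, for the bimatrix game (A, B).  Returns the surviving
    row and column indices."""
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        for i, k in itertools.permutations(rows, 2):
            if all(A[k][j] > A[i][j] for j in cols):   # k strictly dominates i
                rows.remove(i); changed = True; break
        for j, l in itertools.permutations(cols, 2):
            if all(B[i][l] > B[i][j] for i in rows):   # l strictly dominates j
                cols.remove(j); changed = True; break
    return rows, cols

# Prisoner's dilemma: "defect" (index 1) strictly dominates "cooperate"
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
survivors = iterated_strict_dominance(A, B)
```

Here elimination leaves only the defect/defect profile, `([1], [1])`.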

10/24/2014 | 12-1pm | Gross Hall 304-B | Ariel Procaccia | Computational Fair Division: From Cake Cutting to Cloud Computing | info |

Computational Fair Division: From Cake Cutting to Cloud Computing I will present an exciting new interaction between computer science and fair division theory. On the one hand, I will show how computational thinking provides a novel perspective on classic problems in fair division, such as cake cutting and estate division. On the other hand, I will explain how fair division theory is informing the allocation of computational resources. I will also talk about the integration of some of these theoretical ideas into Spliddit (http://www.spliddit.org), a not-for-profit fair division website that aims to make the world just a bit fairer. | |||||

10/10/2014 | 12-1pm | Gross Hall 304-B | Yash Kanoria | Unbalanced random matching markets: the stark effect of competition | info |

Unbalanced random matching markets: the stark effect of competition We study bilateral matching markets such as marriage markets, labor markets, and school/college admissions that allow participants to form partnerships with each other for mutual benefit. We study competition in matching markets with random heterogeneous preferences. First, we find that such markets are extremely competitive: in unbalanced markets, each agent on the short side of the market is matched to one of his top preferences, while each agent on the long side does almost no better than being matched to a random partner. Second, we find that even the slightest imbalance leads to competition that yields an essentially unique stable matching. Our results suggest that any matching market is likely to have a small core, explaining why empirically small cores are ubiquitous. | |||||

10/8/2014 | 3:30pm | Gross Hall 330 | Jenn Wortman Vaughan | Combinatorial Prediction Markets and Connections to Online Learning | info |

Combinatorial Prediction Markets and Connections to Online Learning | |||||

09/26/2014 | 12-1pm | Gross Hall 304-B | Elliot Anshelevich | Stable Matching, Friendship and Altruism | info |

Stable Matching, Friendship and Altruism We will discuss both integral and fractional versions of "correlated stable matching" problems. Each player is a node in a social network and strives to form a good match with a neighboring player; the player utilities from forming a match are correlated. We consider the existence, computation, and inefficiency of stable matchings from which no pair of players wants to deviate. We especially focus on networks where players are embedded in a social context, and may incorporate friendship relations or altruism into their decisions. When the benefits from a match are the same for both players, we show that incorporating the well-being of other players into their matching decisions significantly decreases the price of stability, while the price of anarchy remains unaffected. Furthermore, a good stable matching achieving the price of stability bound always exists and can be reached in polynomial time. We extend these results to more general matching rewards, when players matched to each other may receive different utilities from the match. For this more general case, we show that incorporating social context (i.e., "caring about your friends") can make an even larger difference, and greatly reduce the price of anarchy. Finally, we extend most of our results to network contribution games, in which players can decide how much effort to contribute to each incident edge, instead of simply choosing a single node to match with. | |||||

04/18/2014 | 12-1pm | Gross Hall 304-B | Asu Ozdaglar | Network Security and Contagion | info |

Network Security and Contagion This paper develops a theoretical model of investments in security in a network of interconnected agents. The network connections introduce the possibility of cascading failures depending on exogenous or endogenous attacks and the profile of security investments by the agents. The general presumption in the literature, based on intuitive arguments or analysis of symmetric networks, is that because security investments create positive externalities on other agents, there will be underinvestment in security. We show that this reasoning is incomplete because of a first-order economic force: security investments are also strategic substitutes. In a general (non-symmetric) network, this implies that underinvestment by some agents will encourage overinvestment by others. We demonstrate by means of examples that not only will there be overinvestment by some agents, but also that aggregate probabilities of infection can be lower in equilibrium than in the social optimum. We then provide sufficient conditions for underinvestment. This requires both sufficiently convex cost functions (just convexity is not enough) and networks that are either symmetric or locally tree-like (i.e., either trees or, in the case of stochastic networks, without local cycles with high probability). We also characterize the impact of network structure on equilibrium and optimal investments. Finally, we show that when the attack location is endogenized (by assuming that the attacker chooses a probability distribution over the location of the attack in order to maximize damage), there is another reason for overinvestment: greater investment by an agent shifts the attack to other parts of the network. This is joint work with Daron Acemoglu and Azarakhsh Malekian. | |||||

04/11/2014 | 12-1pm | Gross Hall 304-B | Chris LaSala | TBA | info |

TBA TBA | |||||

03/20/2014 | 12-1pm | Gross Hall 304-B | Nicole Immorlica | Social Status and Badge Design | info |

Social Status and Badge Design Many websites rely on user-generated content to provide value to consumers. These websites often incentivize participation by awarding users badges based on their contributions. In this paper, we consider the design of badge mechanisms that maximize total contributions. Users exert costly effort to make contributions and, in return, are awarded with badges. A badge is only valued to the extent that it signals social status, and thus badge valuations are determined endogenously by the number of users who earn each badge. The goal of this paper is to study the design of optimal and approximately optimal badge mechanisms under these status valuations. We characterize badge mechanisms by whether they use a coarse partitioning scheme, i.e., awarding the same badge to many users, or a fine partitioning scheme, i.e., awarding a unique badge to most users. We find that the optimal mechanism uses both fine partitioning and coarse partitioning. When status valuations exhibit a decreasing marginal value property, we prove that coarse partitioning is a necessary and sufficient feature of any approximately optimal mechanism. Conversely, when status valuations exhibit an increasing marginal value property, we prove that fine partitioning is necessary and sufficient for approximate optimality. Joint work with Greg Stoddard and Vasilis Syrgkanis. | |||||

03/07/2014 | 12-1pm | Gross Hall 304-B | Ruggiero Cavallo | Efficient Auctions with Public Budget Constraints | info |

Efficient Auctions with Public Budget Constraints In most real-world auction settings, bidders do not have unlimited money to spend on acquiring the goods they seek. In this talk I'll address how to maximize efficiency when bidders are subject to arbitrary publicly-known budget constraints. I'll start by presenting the optimal budget-feasible auction among those that make allocation decisions deterministically as a function of bids; notably, maximizing expected efficiency requires sometimes not allocating the good to an agent that is willing and able to pay more than all others. Then, expanding our scope to randomized mechanisms, I will introduce an auction that is optimal among all those that make allocation decisions based on the rank of each bid. Finally, I'll describe a simple auction that converges to perfect efficiency as the population size gets large, showing that budgets do not pose a significant obstacle to allocative efficiency in settings with many bidders. Bio: Ruggiero Cavallo is a research scientist at Yahoo Labs in New York City. He received a BA from Cornell (2001) and a PhD from Harvard (2008) in computer science, the latter under the advisorship of David Parkes. His main research area is mechanism design, though his interests are diverse. He lives in Brooklyn with his wife and daughter. | |||||

02/21/2014 | 12-1pm | Gross Hall 304-B | Pablo Parrilo | Rank/Sparsity Minimization and Latent Variable Graphical Model Selection | info |

Rank/Sparsity Minimization and Latent Variable Graphical Model Selection Suppose we have a Gaussian graphical model with sample observations of only a subset of the variables. Can we separate the extra correlations induced by marginalization over the unobserved, hidden variables from the structure among the observed variables? In other words, is it still possible to consistently perform model selection despite the latent variables? As we shall see, the key issue here is to decompose the concentration matrix of the observed variables into a sparse matrix (representing graphical model structure among the observed variables) and a low-rank matrix (representing the effects of marginalization over the hidden variables). This estimator is given by a tractable convex program, and it consistently estimates model structure in the high-dimensional regime in which the number of observed/hidden variables grows with the number of samples of the observed variables. In our analysis the algebraic varieties of sparse matrices and low-rank matrices play an important role. Based on joint work with Venkat Chandrasekaran (Caltech) and Alan Willsky (MIT). |

Date | Time | Place | Speaker | Topic | |
---|---|---|---|---|---|

11/22/2013 | 12-1pm | Gross Hall 304-B | Ozan Candogan (Duke) | TBA | info |

TBA TBA | |||||

11/15/2013 | 12-1pm | Gross Hall 304-B | Santiago R. Balseiro (Duke) | TBA | info |

TBA TBA | |||||

11/08/2013 | 12-1pm | Gross Hall 304-B | Amin Sayedi (UNC) | TBA | info |

TBA TBA | |||||

10/25/2013 | 12-1pm | Gross Hall 304-B | Vincent Conitzer (Duke) | Tearing down the wall between Mechanism Design with and without money. | info |

Tearing down the wall between Mechanism Design with and without money. Many mechanism designers (algorithmic or other) draw a sharp line between mechanism design with money (auctions, exchanges, ...) and without money (social choice, matching, ...). I will discuss two papers that indicate that this line is blurrier than it seems. In the first, we study generalizations of the Vickrey auction to settings where a single agent wins, but with an arbitrary contract instead of a simple payment. In the second, we study repeated allocation of a good without payments. Here, we can create a type of artificial currency that affects future assignment of the good and that allows us to use modified versions of existing mechanisms with payments. Based on: B. Paul Harrenstein, Mathijs M. de Weerdt, and Vincent Conitzer. Strategy-Proof Contract Auctions and the Role of Ties. To appear in Games and Economic Behavior. Mingyu Guo, Vincent Conitzer, and Daniel Reeves. Competitive Repeated Allocation Without Payments. Short version appeared in Proceedings of the Fifth Workshop on Internet and Network Economics (WINE-09), pp. 244-255, Rome, Italy, 2009. | |||||

09/13/2013 | 12-1pm | Gross Hall 304-B | Chris Wilkens (Yahoo!) | Utility-Target Auctions | info |

Utility-Target Auctions The first-price auction is popular in practice for its simplicity and transparency. Moreover, its potential virtues grow in complex settings where incentive compatible auctions may generate little or no revenue. Unfortunately, generalizing the first-price auction has proven fragile in theory and practice. We show that the auctioneer's choice of bidding language is critical when generalizing beyond the single-item setting, and we propose a specific construction called the utility-target auction that performs well. The utility-target auction includes a bidder's final utility as an additional parameter, identifying the single dimension along which she wishes to compete. This auction is closely related to profit-target bidding in first-price and ascending proxy package auctions and gives strong performance guarantees for a variety of complex auction environments. We also take a dynamic approach to studying pay-your-bid auctions: rather than basing performance guarantees solely on static equilibria, we study the repeated setting and show that robust performance guarantees may be derived from simple axioms of bidder behavior. For example, as long as a loser raises her bid quickly, a standard first-price auction will generate at least as much revenue as a second-price auction. We generalize such ideas to complex pay-your-bid auctions through the utility-target auction: as long as losers do not wait too long to raise bids, a first-price auction will reach an envy-free state that implies a strong lower-bound on revenue; as long as winners occasionally experiment by lowering their bids, the outcome will near the boundary of this envy-free set so bidders do not overpay; and when players with the largest payoffs are the least patient, bids converge to the egalitarian equilibrium. Significantly, bidders need only know whether they are winning or losing in order to implement such behavior. 
Joint work with Darrell Hoy and Kamal Jain. Bio: Chris Wilkens is a research scientist at Yahoo Labs. He joined Yahoo Labs in 2013 after finishing his PhD at the University of California, Berkeley, where he was advised by Christos Papadimitriou. His research focuses on the interface between economics and computer science, understanding how ideas and tools from computer science can be applied to both theoretical and real-world economic problems. Prior to Berkeley, he studied computer science at the Massachusetts Institute of Technology. | |||||

2/20/2013 | 12-1pm | North Bldg 311 | Taiki Todo | False-name-proof matching | info |

False-name-proof matching Matching a set of agents to a set of objects has many real-world applications. One well-studied framework is that of priority-based matching, in which each object is assumed to have a priority order over the agents. The Deferred Acceptance (DA) and Top-Trading-Cycle (TTC) mechanisms are the best-known strategy-proof mechanisms. However, in highly anonymous environments, the set of agents is not known a priori, and it is more natural for objects to instead have priorities over characteristics (e.g., the student's GPA or home address). In this paper, we extend the model so that each agent reports not only its preferences over objects, but also its characteristic. We derive results for various notions of strategy-proofness and false-name-proofness, corresponding to whether agents can only report weaker characteristics or also incomparable or stronger ones, and whether agents can only claim objects allocated to their true accounts or also those allocated to their fake accounts. Among other results, we show that DA and TTC satisfy a weak version of false-name-proofness. Furthermore, DA also satisfies a strong version of false-name-proofness, while TTC fails to satisfy it without an acyclicity assumption on priorities. | |||||
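The agent-proposing Deferred Acceptance mechanism mentioned above can be sketched as follows; the agent and object names are illustrative only, and ties in priorities are assumed away as in the standard model.

```python
def deferred_acceptance(agent_prefs, object_priorities):
    """Agent-proposing Deferred Acceptance for one-to-one matching.

    agent_prefs[a]       : list of objects, most preferred first
    object_priorities[o] : list of agents, highest priority first
    Returns a dict mapping each matched object to its agent.
    """
    next_choice = {a: 0 for a in agent_prefs}   # next object each agent proposes to
    held = {}                                   # object -> tentatively held agent
    free = list(agent_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(agent_prefs[a]):
            continue                            # a has exhausted its preference list
        o = agent_prefs[a][next_choice[a]]
        next_choice[a] += 1
        rank = {x: r for r, x in enumerate(object_priorities[o])}
        if o not in held:
            held[o] = a                         # o tentatively accepts a
        elif rank.get(a, len(rank)) < rank.get(held[o], len(rank)):
            free.append(held[o])                # a displaces the lower-priority holder
            held[o] = a
        else:
            free.append(a)                      # a is rejected; tries its next object
    return held

prefs = {"ann": ["x", "y"], "bob": ["x", "y"]}
prios = {"x": ["bob", "ann"], "y": ["ann", "bob"]}
match = deferred_acceptance(prefs, prios)
```

Both agents propose to x first; bob has higher priority at x, so ann ends up with y, giving the stable matching x-bob, y-ann.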

3/22/2013 | 12-1pm | D344 LSRC | Yiling Chen | Information Elicitation Sans Verification | info |

Information Elicitation Sans Verification The recent advent of human computation -- employing groups of non-experts to solve problems -- has motivated study of a question in mechanism design: How do we elicit useful information when we are unable to verify reports? Existing methods, such as peer prediction and Bayesian truth serum, require assumptions either on the mechanism's knowledge about the participants or on the information structure of the participants in order to elicit private information. Meanwhile, however, there are simple methods such as the ESP Game that seem to require no such assumptions, yet have achieved success in practice. We attack this paradox from two directions. First, we give a broad formalization of the problem of information elicitation without verification and provide impossibility results for the general setting. Primarily, we show that, without assumptions on designer knowledge or participants' information, no mechanism in this setting is truthful. Second, we define and analyze the output agreement class of mechanisms, an extremely broad but simple class in which players are rewarded based on the metric distance between their reports. Output agreement makes no assumptions on designer knowledge or participants' information and thus cannot be "truthful". We resolve the paradox by showing that it is nonetheless useful: It elicits the correct answer according to the common knowledge among the players. This talk is based on joint work with Bo Waggoner. Yiling Chen is an Associate Professor of Computer Science at Harvard University. She received her Ph.D. in Information Sciences and Technology from the Pennsylvania State University. Prior to working at Harvard, she spent two years at the Microeconomic and Social Systems group of Yahoo! Research in New York City. Her current research focuses on topics in the intersection of computer science and economics. 
She is interested in designing and analyzing social computing systems according to both computational and economic objectives. Chen received an ACM EC Outstanding Paper Award, an AAMAS Best Paper Award, and an NSF Career award, and was selected by IEEE Intelligent Systems as one of "AI's 10 to Watch" in 2011. | |||||

3/29/2013 | 2pm | Soc Phych 126 | Herve Moulin | Loss Calibrated Methods for Bipartite Rationing Problems | info |

Loss Calibrated Methods for Bipartite Rationing Problems The standard problem of rationing a single overdemanded commodity has a natural bipartite extension with multiple types of a one-dimensional commodity (e.g., stored in different locations), where each agent can only consume some types of the resource (e.g., has access to only a subset of locations). We define the new standard loss-calibrated rationing methods, which equalize across agents the ratio of shares to (calibrated) losses (demand minus share). We extend them to bipartite methods that 1) are not affected by the elimination of an edge and the corresponding flow (Consistency), and 2) treat resource types with identical connectivity as a single type. They are essentially the only standard methods with a bipartite extension meeting the two properties above. Most of the parametric methods discussed in the literature do not admit such an extension. (Joint work with Jay Sethuraman, Columbia University.) Most, but not all, of the talk is based on the associated paper. Herve Moulin is the George A. Peterkin Professor of Economics at Rice University. | |||||

4/5/2013 | 12-1pm | D344 LSRC | Costis Daskalakis | Revenue Optimization in Multi-Item Auctions | info |

Revenue Optimization in Multi-Item Auctions In his seminal paper, Myerson [1981] provides a revenue-optimal auction for a seller who is looking to sell a single item to multiple bidders. Extending this auction to simultaneously selling multiple heterogeneous items has been one of the central problems in Mathematical Economics. We provide such an extension that is also computationally efficient, using a combinatorial optimization approach. Bio: Constantinos Daskalakis is the X-Window Consortium Associate Professor of Computer Science at MIT. He holds a diploma in electrical and computer engineering from the National Technical University of Athens, and a Ph.D. in electrical engineering and computer sciences from UC Berkeley. His research interests lie in theoretical computer science and applied probability with a focus on the computational aspects of the Internet, online markets and social networks. Daskalakis has been honored with the 2007 Microsoft Graduate Research Fellowship, the 2008 ACM Doctoral Dissertation Award, the Game Theory and Computer Science Prize from the Game Theory Society, the 2010 Sloan Fellowship in Computer Science, the 2011 SIAM Outstanding Paper Prize, the 2011 Ruth and Joel Spira Award for Distinguished Teaching, and the 2012 Microsoft Research Faculty Fellowship. | |||||

4/12/2013 | 12-1pm | TBD | Aaron Kolb | Waiting to Attack | info |

Waiting to Attack A player of privately known type chooses when to attack another. Information about the potential aggressor's type is revealed publicly according to an exogenous news process and the fact that attack has not yet occurred. I analyze Markov Perfect Equilibria (MPE) using the belief as a state variable. No MPE in pure strategies exist. I propose an intuitive refinement under which all MPE are of the following structure: for high states, both types attack with certainty; for a (possibly empty) interval of intermediate states, no type attacks; and for low states, the high type attacks while the low type mixes. I fully characterize equilibria in this class with closed-form solutions. | |||||

4/26/2013 | 12-1pm | D344 LSRC | Robert Kleinberg | An Analysis of One-Dimensional Schelling Segregation | info |

An Analysis of One-Dimensional Schelling Segregation In 1969, economist Thomas Schelling introduced a landmark model of residential segregation in which individuals move out of neighborhoods where their ethnicity constitutes a minority. The model is influential both for its policy implications - bearing on the causes of segregation and its possible remedies - and for its theoretical implications, as a benchmark model exemplifying the emergence of macroscopic phenomena from random local interactions. Our work presents the first rigorous analysis of Schelling's original model, focusing on the case of one-dimensional ring networks. Our results call into question the conventional wisdom about Schelling segregation by showing that the average size of segregated regions scales only linearly in the size of each individual's neighborhood. This talk is based on joint work with Christina Brandt, Nicole Immorlica, and Gautam Kamath. Bio: Robert Kleinberg is an Assistant Professor of Computer Science at Cornell University. His research studies the design and analysis of algorithms, and their applications to electronic commerce, networking, information retrieval, and other areas. Prior to receiving his doctorate from MIT in 2005, Kleinberg spent three years at Akamai Technologies, where he assisted in designing the world's largest Internet Content Delivery Network. He is the recipient of a Microsoft Research New Faculty Fellowship, an Alfred P. Sloan Foundation Fellowship, and an NSF CAREER Award. |
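A toy simulation in the spirit of Schelling's one-dimensional model can be sketched as below. The swap dynamics, window size, and parameters are simplified stand-ins for illustration, not the exact process analyzed in the paper.

```python
import random

def schelling_ring(n=100, w=2, steps=3000, seed=1):
    """Toy one-dimensional Schelling dynamics on a ring of n two-type agents.

    An agent is unhappy when its own type is a strict minority in the window
    of w neighbours on each side; each step swaps two randomly chosen unhappy
    agents of opposite types.  Returns the average length of monochromatic
    runs at the end, a crude measure of segregation.
    """
    rng = random.Random(seed)
    ring = [i % 2 for i in range(n)]
    rng.shuffle(ring)

    def unhappy(i):
        window = [ring[(i + d) % n] for d in range(-w, w + 1)]
        return 2 * window.count(ring[i]) < len(window)

    for _ in range(steps):
        zeros = [i for i in range(n) if ring[i] == 0 and unhappy(i)]
        ones = [i for i in range(n) if ring[i] == 1 and unhappy(i)]
        if not zeros or not ones:
            break                      # no opposite-type pair left to swap
        i, j = rng.choice(zeros), rng.choice(ones)
        ring[i], ring[j] = ring[j], ring[i]

    # average run length = ring size / number of type boundaries
    boundaries = sum(ring[i] != ring[(i + 1) % n] for i in range(n))
    return n / max(boundaries, 1)

avg_run = schelling_ring()   # average segregated-run length after the dynamics
```

In a uniformly random arrangement the expected run length is about 2; running the dynamics lets one see how the final run length varies with the window size w, which is the quantity the paper's result concerns.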

Date | Time | Place | Speaker | Topic | |
---|---|---|---|---|---|

1/27/2012 | 10 am-11:30 am | McKinley Seminar Room, Fuqua | Thanh Nguyen | Bargaining in Networks | info |

Bargaining in Networks It is believed that knowledge of the links between economic agents is useful for understanding things like the distribution of wealth and market fluctuations. This motivates the study of economic networks that integrates strategic interactions with the underlying network structure. In this talk I analyze the outcome of a non-cooperative bargaining process, where the interactions between agents are constrained by a network. I develop a general framework, based on convex programming, to characterize the unique stationary equilibrium outcome of this game. This equilibrium captures an explicit connection between an agent's bargaining power and the centrality of his position in the network. I also extend this framework to study how a networked bargaining process can give rise to endogenous market fluctuations. |
|||||

2/8/2012 | 11:45 am-12:45 pm | D106 LSRC | Ashwin Machanavajjhala | No Free Lunch in Data Privacy | info |

No Free Lunch in Data Privacy Tremendous amounts of personal data about individuals are being collected and shared online. Legal requirements and an increase in public awareness due to egregious breaches of individual privacy have made data privacy an important field of research. Recent research, culminating in the development of a powerful notion called differential privacy, has transformed this field from a black art into a rigorous mathematical discipline. This talk critically analyzes the trade-off between accuracy and privacy in the context of social advertising -- recommending people, products or services to users based on their social neighborhood. I will present a theoretical upper bound on the accuracy of performing recommendations that are solely based on a user's social network, for a given level of (differential) privacy of sensitive links in the social graph. I will show using real networks that good private social recommendations are feasible only for a small subset of the users in the social network or for a lenient setting of privacy parameters. I will also describe some exciting new research about a no free lunch theorem, which argues that privacy tools (including differential privacy) cannot simultaneously guarantee utility as well as privacy for all types of data, and conclude with directions for future research in data privacy and big-data management. |
|||||

2/10/2012 | 4:15 pm-5:15 pm | D344 LSRC | Josh Letchford | Computing Randomized Security Strategies in Networked Domains | info |

Computing Randomized Security Strategies in Networked Domains
Josh Letchford is a Ph.D. Candidate in the Department of Computer Science, Duke University. |
|||||

3/14/2012 | 11:45 am-12:45 pm | D106 LSRC | Grant Schoenebeck | Discovering Social Network Structure | info |

Discovering Social Network Structure
What do social networks look like? With increasing amounts of data and computational power, it seems like we are closer than ever to answering this question. However, there may be limits to the kinds of properties that can be reconstructed from sampled data. Moreover, even if we had all the data, we might still be asking computationally infeasible questions that require solving intractable problems. For example, can we really expect to uncover all the dense regions of a social network when in the worst case this problem is NP-hard? This talk will be a computer scientist's view of where we are in our understanding of social network structure and what future directions may lead to deeper insights. Grant Schoenebeck is currently the Simons Foundation Postdoctoral Fellow in Theoretical Computer Science at Princeton University, the Senior Postdoctoral Fellow at the Center for Computational Intractability, and a visitor at the Institute for Advanced Study. Dr. Schoenebeck studies how computational approaches and insights can help us to better understand the world we live in. His research interests span several fields of theoretical computer science including computational complexity theory and topics intersecting theoretical computer science and social and economic networks. He received his PhD in computer science from UC Berkeley, and graduated from Harvard University with highest honors in mathematics while also earning a master's (SM) in computer science. |
|||||

4/2/2012 | 11:45 am-12:45 pm | D106 LSRC | Jing Chen | Resilient Mechanism Design and Combinatorial Auctions | |

9/10/2012 | 4-5 pm | 130A North Bldg, Duke | Tim Roughgarden | Porting the Computer Science Toolbox to Game Theory and Economics | info |

Porting the Computer Science Toolbox to Game Theory and Economics Theoretical computer science has brought new ideas and techniques to game and economic theory. A primary signature of the computer science approach is *approximation*: the idea of building credibility for a proposed solution by proving that its performance is always within a small factor of an ideal (and typically unimplementable) solution. We explain two of our recent contributions in this area, one motivated by networks and one by auctions. Bio: Tim Roughgarden received his Ph.D. from Cornell University in 2002 and joined the Stanford CS department in 2004, where he is currently an associate professor. His research interests are in theoretical computer science, especially its interfaces with game theory and networks. He wrote the book "Selfish Routing and the Price of Anarchy" (MIT Press, 2005) and co-edited the book "Algorithmic Game Theory" with Nisan, Tardos, and Vazirani (Cambridge, 2007). His awards include the 2002 ACM Doctoral Dissertation Award (Honorable Mention), the 2003 Tucker Prize, a 2007 PECASE Award, the 2008 Shapley Lectureship of the Game Theory Society, the 2009 ACM Grace Murray Hopper Award, and the 2012 EATCS-SIGACT Gödel Prize. He recently developed a free online course on the design and analysis of algorithms, enrolling tens of thousands of students. |
|||||

8/15/2012 | 12-1pm | LSRC D344 | Hossein Azari Soufiani | Random Utility for Social Choice | info |

Random Utility for Social Choice Random utility theory models an agent's preferences over alternatives by drawing a real-valued score for each alternative (typically independently) from a parameterized distribution, and then ranking according to scores. A special case that has received significant attention is the Plackett-Luce model, for which fast inference methods for maximum likelihood estimators are available. We develop conditions on general random utility models that enable fast inference within a Bayesian framework through MC-EM, providing unimodal log-likelihood functions. Results on both real-world and simulated data provide support for the scalability of the approach, despite its flexibility. | |||||
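The random-utility view of the Plackett-Luce model can be illustrated with the standard Gumbel-noise construction; the weights below are arbitrary example values.

```python
import math
import random

def sample_ranking(theta, rng):
    """Draw one ranking from a Plackett-Luce model with positive weights theta.

    Random-utility construction: give alternative i the score
    log(theta[i]) + Gumbel noise and sort by score descending, which is
    distributionally identical to sequentially picking alternatives with
    probability proportional to the remaining weights.
    """
    scores = {i: math.log(t) - math.log(-math.log(rng.random()))
              for i, t in enumerate(theta)}
    return sorted(scores, key=scores.get, reverse=True)

rng = random.Random(0)
theta = [4.0, 2.0, 1.0]          # alternative 0 has the largest weight
rankings = [sample_ranking(theta, rng) for _ in range(5000)]
top0 = sum(r[0] == 0 for r in rankings) / len(rankings)
# under Plackett-Luce, P(alternative 0 ranked first) = 4 / (4 + 2 + 1)
```

The empirical top-choice frequency should be close to 4/7; the same construction with other (possibly non-Gumbel) noise distributions gives the more general random utility models the talk considers.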

8/17/2012 | 12-1pm | LSRC D344 | Markus Brill | Computing Set-Valued Solution Concepts in Social Choice and Game Theory | info |

Computing Set-Valued Solution Concepts in Social Choice and Game Theory In this talk, I will present recent results on the computational complexity
of solution concepts from both game theory and social
choice theory.
| |||||

8/28/2012 | 11:40am-1pm | SocSci 113 | Milind Tambe | Game theory for security: Key algorithmic principles, deployed systems, lessons learned | info |

Game theory for security: Key algorithmic principles, deployed systems, lessons learned Security is a critical concern around the world, whether it's the
challenge of protecting ports, airports and other critical national
infrastructure, or protecting wildlife and forests, or suppressing crime
in urban areas. In many of these cases, limited security resources
prevent full security coverage at all times; instead, these limited
resources must be scheduled, avoiding schedule predictability, while
simultaneously taking into account different target priorities, the
responses of the adversaries to the security posture and potential
uncertainty over adversary types.
| |||||

9/7/2012 | 12-1pm | D344 LSRC | Troels Sorensen | On the Approximation Performance of Fictitious Play in Finite Games | info |

On the Approximation Performance of Fictitious Play in Finite Games We study the performance of Fictitious Play, when used as a heuristic for finding an approximate Nash equilibrium of a two-player game. We exhibit a class of two-player games with payoffs in the range [0, 1] showing that Fictitious Play fails to find a solution with an additive approximation guarantee significantly better than 1/2. Our construction shows that for n x n games, in the worst case both players may perpetually have mixed strategies whose payoffs fall short of the best response by an additive quantity 1/2 - O(1/n^(1-delta)) for arbitrarily small delta. We also show an essentially matching upper bound of 1/2 - O(1/n). | |||||
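For readers unfamiliar with the heuristic under study: in fictitious play, each player repeatedly best-responds to the opponent's empirical distribution of past play. A minimal sketch on matching pennies (an illustrative zero-sum game, chosen because convergence there is classical; it is not the paper's adversarial construction):

```python
def best_response(payoff, opp_counts):
    """Pure best response against the opponent's empirical play."""
    n = len(payoff)
    return max(range(n),
               key=lambda i: sum(payoff[i][j] * c for j, c in enumerate(opp_counts)))

def fictitious_play(A, B, rounds=2000):
    """Fictitious play for a two-player game with payoff matrices
    A (row player) and B (column player). Each round, both players
    best-respond to the opponent's empirical mixed strategy so far;
    returns the final empirical strategies."""
    n, m = len(A), len(A[0])
    row_counts, col_counts = [0] * n, [0] * m
    row_counts[0] += 1  # arbitrary initial play
    col_counts[0] += 1
    # Transpose B so the column player can reuse best_response.
    B_t = [[B[r][c] for r in range(n)] for c in range(m)]
    for _ in range(rounds):
        i = best_response(A, col_counts)
        j = best_response(B_t, row_counts)
        row_counts[i] += 1
        col_counts[j] += 1
    total = rounds + 1
    return [c / total for c in row_counts], [c / total for c in col_counts]

# Matching pennies is zero-sum, so the empirical strategies are
# guaranteed (Robinson 1951) to approach the equilibrium (1/2, 1/2).
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
x, y = fictitious_play(A, B)
```

The paper's point is that in general games this process can stall an additive 1/2 away from a best response; the zero-sum guarantee used here does not extend.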

9/14/2012 | 12-1pm | D344 LSRC | Ashish Goel | Trust and Mistrust in Social Networks | info |

Trust and Mistrust in Social Networks Automated reputation and trust systems play an ever increasing role in the emerging networked society. We will first describe a model of networked trust that functions by exchange of IOUs among participants. Informally, every node acts as a bank and prints its own currency, which is then used to purchase services within the network. Such "trust networks" are robust to infiltration, since any node only accepts currency printed by other nodes that it directly trusts. We will analyze the liquidity of this model, i.e., the number of transactions that such a network can support. We will show that in many interesting cases, the liquidity of these trust networks is comparable to a system where currency is issued by a single centralized bank.
| |||||

9/21/2012 | 12-1pm | LSRC D344 | Angelina Vidali | Mechanism Design: Auctions and Scheduling | info |

Mechanism Design: Auctions and Scheduling I will give an introduction and present some of my recent results on one of
the most fundamental problems in algorithmic game
theory and mechanism design: the problem of
scheduling unrelated machines to minimize the
makespan. I will emphasize the connection between
this problem and the problem of designing truthful
auctions for selling multiple items. I will also
present a geometric characterization of
truthfulness, as well as some very recent work on
strongly truthful mechanisms.
| |||||

11/2/2012 | 11:45am-12:45pm | McKinley Seminar room, Fuqua (PDF map) | Matt Jackson | Diffusion of Microfinance | info |

Diffusion of Microfinance We examine how participation in a microfinance program diffuses through social networks, using detailed demographic, social network, and participation data from 43 villages in South India. We exploit exogenous variation in the importance (in a network sense) of the people who were first informed about the program, the "injection points." Microfinance participation is significantly higher when the injection points have higher eigenvector centrality. We also estimate structural models of diffusion that allow us to (i) determine the relative roles of basic information transmission versus other forms of peer influence, and (ii) distinguish information passing by participants and nonparticipants. We find that participants are significantly more likely to pass information on to friends and acquaintances than informed non-participants. However, information passing by non-participants is still substantial and significant, accounting for roughly one-third of informedness and participation. We also find that, once we have properly conditioned on an individual being informed, her decision to participate is not significantly affected by the participation of her acquaintances. | |||||
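Eigenvector centrality, the statistic behind the injection-point result above, scores a node highly when its neighbors are themselves highly scored. A generic power-iteration sketch (the star graph is a made-up example, not the village network data):

```python
def eigenvector_centrality(adj, iters=200):
    """Eigenvector centrality via power iteration. Iterating on
    A + I rather than A alone avoids oscillation on bipartite
    graphs and leaves the leading eigenvector unchanged."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        y = [x[i] + sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y)
        x = [v / norm for v in y]
    total = sum(x)
    return [v / total for v in x]  # normalize to sum to 1

# Star on 4 nodes: the hub (node 0) is the most central, so it would
# be the most effective "injection point" in the sense of the paper.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
cent = eigenvector_centrality(star)
```

For the star, the limiting centralities are proportional to (sqrt(3), 1, 1, 1), so the hub's normalized score is sqrt(3)/(sqrt(3)+3).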

11/8/2012 | 4-5:30pm | McKinley Seminar Room, Fuqua (PDF map) | Jason Hartline | Approximation in Mechanism Design, Part 1 | info |

Approximation in Mechanism Design, Part 1 In the first part, I will review classical auction theory. Topics
covered include: Bayes-Nash equilibrium (BNE), characterization of
BNE, revenue equivalence, solving for BNE, and revenue maximization
(virtual values and marginal revenue). This theory provides a basis
for all of auction theory.
| |||||

11/9/2012 | 11:45am-1:15pm | McKinley Seminar Room, Fuqua (PDF map) | Jason Hartline | Approximation in Mechanism Design, Part 2 | info |

Approximation in Mechanism Design, Part 2 In the first part, I will review classical auction theory. Topics
covered include: Bayes-Nash equilibrium (BNE), characterization of
BNE, revenue equivalence, solving for BNE, and revenue maximization
(virtual values and marginal revenue). This theory provides a basis
for all of auction theory.
| |||||

11/12/2012 | 4:30-6pm | Social Sciences 113 | Jason Hartline | The Simple Economics of Approximately Optimal Auctions (also see the invited lectures below) | info |

The Simple Economics of Approximately Optimal Auctions (also see the invited lectures below) Paper abstract:
| |||||

11/29/2012 | 4:30-6pm | North Building 130A | Tuomas Sandholm | Algorithms for Large Incomplete-Information Games | info |

Algorithms for Large Incomplete-Information Games Incomplete-information games - such as most auctions, negotiations, card games, and future security games - cannot be solved using minimax search; rather, totally different algorithms are needed. I will overview our work on incomplete-information games from the last seven years. I will start by discussing lossless and lossy abstraction algorithms. I will present brand new results on the first lossy abstraction algorithms with bounds on solution quality. I will then discuss first-order algorithms for finding an epsilon-equilibrium in two-person zero-sum games; one of these algorithms has O(log(1/epsilon)) convergence, which is exponentially faster than prior first-order methods. I will then discuss how qualitative knowledge of the structure of equilibrium can be used to enable and speed up equilibrium finding. I will finish with brand new results on safe opponent exploitation and strategy purification. We have applied the techniques discussed in this talk to poker, yielding some of the world's best poker-playing programs. Dr. Tuomas Sandholm is Professor in the Computer Science Department at Carnegie Mellon University. He has published over 450 papers on game theory; electronic commerce; artificial intelligence; auctions and exchanges; multiagent systems; automated negotiation and contracting; coalition formation; voting; search and integer programming; safe exchange; normative models of bounded rationality; resource-bounded reasoning; machine learning; and networks. He has over 20 years of experience building optimization-based electronic marketplaces, and has fielded several of his systems. He was Founder, Chairman, and CTO/Chief Scientist of CombineNet, Inc. from 1997 until its acquisition in 2010. During this period the company commercialized over 800 large-scale generalized combinatorial auctions, with over $60 billion in total spend and over $6 billion in generated savings. Dr. 
Sandholm's algorithms also run the US-wide UNOS kidney exchange. His new startup, Optimized Markets, Inc., is focused on advertising markets. Since early 2009, he has been the design consultant of Baidu's sponsored search auctions; Baidu's market cap increased 5x to $50 billion during this period due to better monetization. He has also served as consultant, advisor, or board member for Yahoo!, Google, Rare Crowds, Granata Decision Systems, Netcycler, and others. He has applied his game-solving algorithms to develop some of the world's best poker-playing programs. He holds a PhD and MS in computer science and a Dipl. Eng. with distinction in Industrial Engineering and Management Science. He is a recipient of the NSF Career Award, the inaugural ACM Autonomous Agents Research Award, the Alfred P. Sloan Foundation Fellowship, the Carnegie Science Center Award for Excellence, and the Computers and Thought Award. He is a Fellow of the ACM and AAAI. | |||||

11/30/2012 | 12-1pm | McKinley Seminar Room, Fuqua | Tuomas Sandholm | Modern Dynamic Kidney Exchanges | info |

Modern Dynamic Kidney Exchanges In kidney exchanges, patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. The clearing problem involves finding a social welfare maximizing set of non-overlapping short cycles. We proved this is NP-hard. It was one of the main obstacles to a national kidney exchange. We developed the first algorithm capable of clearing these exchanges optimally on a nationwide scale. The key was incremental problem formulation because the formulation that gives tight LP bounds is too large to even store. On top of the branch-and-price paradigm we developed techniques that dramatically improve runtime and memory usage. Furthermore, clearing is actually an online problem where patient-donor pairs and altruistic donors appear and expire over time. We first developed trajectory-based online stochastic optimization algorithms (that use our optimal offline solver as a subroutine) for this. Such algorithms tend to be too compute-intensive at run time, so recently we developed a general approach for giving batch algorithms guidance about the future without a run-time hit. It learns potentials of elements of the problem, and then uses them in each batch clearing. I will share experiences from using our algorithms to run the UNOS US-wide kidney exchange and two regional ones. We introduced several design enhancements to the exchanges. For one, we used our algorithms to launch the first never-ending altruistic donor chains. I will present new results, both theoretical and experimental, on the role of long chains. I will also discuss planning that is robust against last-minute execution failures. The talk covers material from the following papers: - Dynamic Matching via Weighted Myopia with Application to Kidney Exchange. AAAI-12. (With Dickerson, J. and Procaccia, A.) - Optimizing Kidney Exchange with Transplant Chains: Theory and Reality. AAMAS-12. (With Dickerson, J. and Procaccia, A.)
- Online Stochastic Optimization in the Large: Application to Kidney Exchange. IJCAI-09. (With Awasthi, P.) - A Nonsimultaneous, Extended, Altruistic-Donor Chain. New England Journal of Medicine 360(11), March 2009. (With Rees, M., Kopke, J., Pelletier, R., Segev, D., Rutter, M., Fabrega, A., Rogers, J., Pankewycz, O., Hiller, J., Roth, A., Unver, U., and Montgomery, R.) - Clearing Algorithms for Barter Exchange Markets: Enabling Nationwide Kidney Exchanges. EC-07. (With Abraham, D. and Blum, A.) Dr. Tuomas Sandholm is Professor in the Computer Science Department at Carnegie Mellon University. He has published over 450 papers on game theory; electronic commerce; artificial intelligence; auctions and exchanges; multiagent systems; automated negotiation and contracting; coalition formation; voting; search and integer programming; safe exchange; normative models of bounded rationality; resource-bounded reasoning; machine learning; and networks. He has over 20 years of experience building optimization-based electronic marketplaces, and has fielded several of his systems. He was Founder, Chairman, and CTO/Chief Scientist of CombineNet, Inc. from 1997 until its acquisition in 2010. During this period the company commercialized over 800 large-scale generalized combinatorial auctions, with over $60 billion in total spend and over $6 billion in generated savings. Dr. Sandholm's algorithms also run the US-wide UNOS kidney exchange. His new startup, Optimized Markets, Inc., is focused on advertising markets. Since early 2009, he has been the design consultant of Baidu's sponsored search auctions; Baidu's market cap increased 5x to $50 billion during this period due to better monetization. He has also served as consultant, advisor, or board member for Yahoo!, Google, Rare Crowds, Granata Decision Systems, Netcycler, and others. He has applied his game-solving algorithms to develop some of the world's best poker-playing programs.
He holds a PhD and MS in computer science and a Dipl. Eng. with distinction in Industrial Engineering and Management Science. He is a recipient of the NSF Career Award, the inaugural ACM Autonomous Agents Research Award, the Alfred P. Sloan Foundation Fellowship, the Carnegie Science Center Award for Excellence, and the Computers and Thought Award. He is a Fellow of the ACM and AAAI. | |||||
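At toy scale, the clearing problem defined in the abstract (choose vertex-disjoint cycles of length at most 3 to maximize the number of transplants) can be solved by brute force, which makes the objective concrete. The compatibility graph below is invented; real instances require the incremental formulation and branch-and-price machinery the talk describes:

```python
from itertools import permutations

def short_cycles(compat, max_len=3):
    """All 2- and 3-cycles in a directed compatibility graph, where
    compat[i][j] means pair i's donor can give to pair j's patient.
    Each cycle is returned as the set of pairs it covers."""
    n = len(compat)
    found = set()
    for k in range(2, max_len + 1):
        for perm in permutations(range(n), k):
            if all(compat[perm[i]][perm[(i + 1) % k]] for i in range(k)):
                found.add(frozenset(perm))
    return [set(c) for c in found]

def clear(cycles):
    """Brute-force clearing: choose disjoint cycles maximizing the
    number of patients transplanted. Exponential time, which is
    exactly why realistic exchanges need much cleverer algorithms."""
    best = 0
    def rec(i, used, count):
        nonlocal best
        best = max(best, count)
        for j in range(i, len(cycles)):
            if not (cycles[j] & used):
                rec(j + 1, used | cycles[j], count + len(cycles[j]))
    rec(0, set(), 0)
    return best

# Invented 4-pair instance: pairs 0 and 1 can swap (a 2-cycle), and
# pairs 1, 2, 3 form a 3-way swap. The two cycles share pair 1, so
# the optimal clearing performs the 3-way swap: 3 transplants.
compat = [[False] * 4 for _ in range(4)]
for a, b in [(0, 1), (1, 0), (1, 2), (2, 3), (3, 1)]:
    compat[a][b] = True
best = clear(short_cycles(compat))
```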

12/3/2012 | 12-1pm | D344 LSRC | Sayan Bhattacharya | Competitive Equilibria with Budget Limits | info |

Competitive Equilibria with Budget Limits We consider the problem of finding a competitive equilibrium when agents have budget constraints and items are indivisible. In our model, the utility of an agent is equal to her valuation minus payment when the payment is no more than her budget; otherwise she gets negative utility. In a competitive equilibrium, every item is assigned a non-negative price, and the items are allocated to the agents in such a way that every agent gets her utility-maximizing bundle, and every unallocated item has price zero. Most previous work on competitive equilibria (with and without budgets) has focused on the Gross Substitutes case, where a competitive equilibrium is guaranteed to exist modulo tie-breaking. Our work deals with two types of valuation functions that do not satisfy this property in the presence of budget constraints: additive valuations, and concave combinatorial valuations. We present some hardness results for deciding the existence of exact competitive equilibria. Next, we develop connections to a related Fisher model of market clearing in order to design poly-time algorithms that compute approximate competitive equilibria. Not only do our algorithms require several new technical ideas, but we also show that this approach sheds new light on algorithms for the Fisher model. In particular, it yields an interpretation of the well-known Eisenberg-Gale convex program as optimizing a measure of social welfare. We also prove strong revenue properties of the equilibria we construct. We thereby improve and extend the work on revenue maximizing envy-free multi-unit auctions by Feldman et al. in EC 2012. Joint work with Vincent Conitzer, Kamesh Munagala and Xiaoming Xu. |

Date | Time | Place | Speaker | Topic | |
---|---|---|---|---|---|

01/17/2014 | 11:30am-1pm | Social Sciences 113 | Benjamin Brooks (Princeton) | Surveying and Selling: Belief and Surplus Extraction in Auctions | info |

Surveying and Selling: Belief and Surplus Extraction in Auctions TBA | |||||

2/24/2012 | 12-1pm | D344 LSRC | Troels Sorensen | Computing proper equilibria of finite two-player games | info |

Computing proper equilibria of finite two-player games Nash equilibria are commonly used to predict outcomes of strategic interactions. However, the simple stability condition defining equilibria often allows solutions that rely on irrational behavior by the agents. This issue has been addressed by various refinements of the equilibrium conditions. One of the most restrictive solution concepts that is still guaranteed to exist is the proper equilibrium. Existing algorithms for computing it rely on numerically solving ill-conditioned problems, and therefore fail on larger games. In this talk, I present a new algorithm that avoids these problems and computes an exact proper equilibrium. Troels Bjerre Sorensen is a Research Fellow at the Department of Computer Science, University of Warwick. |
|||||

3/16/2012 | 12-1pm | D344 LSRC | Sebastien Lahaie | A Tractable Combinatorial Market Maker Using Constraint Generation | info |

A Tractable Combinatorial Market Maker Using Constraint Generation We present a new automated market maker that provides liquidity across multiple logically interrelated securities. Our approach lies somewhere between the industry standard---treating related securities as independent and thus not transmitting any information from one security to another---and a full combinatorial market maker for which pricing is computationally intractable. Our market maker, based on convex optimization and constraint generation, is tractable like independent securities yet propagates some information among related securities like a combinatorial market maker, resulting in more complete information aggregation. We prove several favorable properties of our scheme and evaluate its information aggregation performance on survey data involving hundreds of thousands of complex predictions about the 2008 U.S. presidential election. Sebastien Lahaie is a research scientist at Yahoo! Research. Sebastien received his PhD in Computer Science from Harvard University in 2007. At Yahoo! his research focuses on marketplace design, including sponsored search and display advertising. He is interested in designing market algorithms that scale well and properly anticipate user behavior. Other interests include preference elicitation, reputation systems, combinatorial optimization, and mechanism design. |
|||||
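For background, the canonical automated market maker that combinatorial designs like the one above generalize is Hanson's logarithmic market scoring rule (LMSR), which prices a single set of mutually exclusive outcomes. A minimal sketch (the liquidity parameter b and the example trade are illustrative choices, not from the paper):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    computed stably via the log-sum-exp trick."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices: the softmax of q / b. They are positive
    and sum to 1, so they can be read as outcome probabilities."""
    m = max(q)
    z = [math.exp((qi - m) / b) for qi in q]
    s = sum(z)
    return [v / s for v in z]

def trade_cost(q, i, shares, b=100.0):
    """What a trader pays to buy `shares` of outcome i at state q."""
    q2 = list(q)
    q2[i] += shares
    return lmsr_cost(q2, b) - lmsr_cost(q, b)

# Three outcomes, no trades yet: uniform prices of 1/3 each.
q0 = [0.0, 0.0, 0.0]
p0 = lmsr_prices(q0)
cost = trade_cost(q0, 0, 10.0)       # slightly more than 10 * (1/3)
p1 = lmsr_prices([10.0, 0.0, 0.0])   # the purchase raised outcome 0's price
```

Treating logically related securities as independent means running one such market per security; the paper's constraint-generation approach sits between that and a fully combinatorial (intractable) cost function.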

3/30/2012 | 12-1pm | D344 LSRC | Hamid Nazerzadeh | Optimal Dynamic Mechanism Design and the Virtual Pivot Mechanism | info |

Optimal Dynamic Mechanism Design and the Virtual Pivot Mechanism
We consider the problem of designing optimal mechanisms for settings
where agents have dynamic private information. We present the
Virtual-Pivot Mechanism, which is optimal in a large class of
environments that satisfy a separability condition. The mechanism
satisfies a rather strong equilibrium notion (it is periodic ex-post
incentive compatible and individually rational). We provide both
necessary and sufficient conditions for immediate incentive
compatibility for mechanisms that satisfy periodic ex-post incentive
compatibility in future periods. The result also yields a simple
mechanism for selling a sequence of items to a single buyer. We also
show the allocation rule of the Virtual-Pivot Mechanism has a very
simple structure (a Virtual Index) in multi-armed bandit settings.
Hamid Nazerzadeh is an Assistant Professor in the Information and Operations Management Department at the Marshall School of Business, University of Southern California. He received his Ph.D. in Operations Research from Stanford University. His research interests include market design, revenue management, and optimization algorithms. Prior to joining the Marshall School, he was post-doc researcher at Microsoft Research, New England lab. He holds several patents on Internet advertising and cloud computing services. |
|||||

4/13/2012 | 12-1pm | McKinley Seminar Room, Fuqua | Rakesh Vohra | Auction design with fairness concerns: subsidies vs. set-asides | info |

Auction design with fairness concerns: subsidies vs. set-asides
Government procurement and allocation programs often
use subsidies and set-asides favoring small
businesses and other target groups to address
fairness concerns. These concerns are in addition to
standard objectives such as efficiency and
revenue. We study the design of the optimal mechanism
for a seller concerned with efficiency, subject to a
constraint to favor a target group. In our model,
buyers' private values are determined by costly
pre-auction investment. If the constraint is
distributional, i.e. to guarantee that the target
group wins `sufficiently often', then the constrained
efficient mechanism is a flat subsidy. This is
consistent with findings in the empirical
literature. In contrast, if the constraint is to
ensure a certain investment level by the target
group, the optimal mechanism is a type dependent
subsidy. In this case a set-aside may be better than
a flat or percentage subsidy.
Rakesh Vohra is the John L. and Helen Kellogg Professor of Managerial Economics and Decision Sciences at Kellogg School of Management, Northwestern University. |
|||||

4/27/2012 | 12-1pm | LSRC D344 | Taiki Todo | False-name-proof Mechanism Design without Money | info |

False-name-proof Mechanism Design without Money Mechanism design studies how to design mechanisms that result in good outcomes even when agents strategically report their preferences. In traditional settings, it is assumed that a mechanism can enforce payments to give an incentive for agents to act honestly. However, in many Internet application domains, introducing monetary transfers is impossible or undesirable. Also, in such highly anonymous settings as the Internet, it is possible for an agent to pretend to be multiple agents and submit multiple reports under different identifiers. The effect of such false-name manipulations can be more serious in a mechanism without monetary transfers, since submitting multiple reports would be less risky. In this talk, I will present a case study in false-name-proof mechanism design without money. In the basic setting, agents are located on a real line, and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. This setting is called the facility location problem and can represent various situations where an agent's preference is single-peaked. First, I present a full characterization of false-name-proof mechanisms in this basic setting and the tight bounds of the approximation ratios for three objective functions. I then extend these results in two natural directions. Finally, I will share with you current research issues on false-name-proof mechanism design without money. This talk is based on a joint work with Atsushi Iwasaki and Makoto Yokoo. |
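As background for the basic setting above: for a facility on a line with single-peaked preferences, the classic strategyproof mechanism without money places the facility at a median of the reported peaks. A sketch (the example peaks are made up, and this ignores the false-name dimension the talk adds):

```python
import statistics

def facility_location(peaks):
    """Median mechanism: locate the facility at a (low) median of the
    reported peaks. With single-peaked preferences, no agent can move
    the median toward its true peak by misreporting, so truthful
    reporting is a dominant strategy."""
    return statistics.median_low(peaks)

truthful = facility_location([0, 3, 10])      # facility placed at 3
# The agent whose peak is 10 gains nothing by exaggerating:
manipulated = facility_location([0, 3, 100])  # still 3
```

False-name-proofness is strictly harder: an agent can also submit extra reports under fake identities, and the talk characterizes which mechanisms survive that manipulation.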

Date | Time | Place | Speaker | Topic |
---|---|---|---|---|

09/23/11 | 11.45am - 12.45pm | D344 LSRC | Troels Sørensen | Risk when playing for broke |

10/21/11 | 11.45am - 12.45pm | D344 LSRC | Arpita Ghosh | Incentivizing High Quality User-Generated Content |

11/04/11 | 11.45 am - 12.45 pm | D344 LSRC | Rahul Sami | Sybilproof Transitive Trust Protocols |

11/11/11 | 11.45 am - 12.45 pm | 2008 MBA Classroom, Fuqua | Preston McAfee | Economics and Machine Learning |

11/18/11 | 11.45 am - 12.45 pm | D344 LSRC | Kevin Leyton-Brown | Beyond Equilibrium: Predicting Human Behavior in Normal Form Games |

12/02/11 | 11.45 am - 1.15 pm | D344 LSRC | Ben Lubin | Market Mechanisms for Data Center Resource Allocation and Their Pricing |

© Duke University Department of Computer Science 2016.