Exploring the Intersection of Computer Science and Economics

Date | Time | Place | Speaker | Topic | |
---|---|---|---|---|---

2017-11-17 | 12:00-01:00 | Gross Hall 304B, Duke | Markus Brill | Approval Voting, Proportional Representation, & Liquid Democracy | info |

In this talk, I will survey recent results concerning the proportional representation of voters in approval-based committee elections. In this setting, voters express their preferences over a set of candidates by specifying, for each candidate, whether they approve ("like") that candidate or not. Based on these approval votes, a fixed-size subset of winning candidates needs to be selected. I will also discuss proportional rankings, where the goal is to rank-order the candidates in a way that is representative of voters' preferences. As an example application scenario where such rankings are desirable, I will briefly outline some ideas related to "liquid democracy", an exciting new paradigm for collective decision making with many participants.

2017-10-20 | 12:00-01:00 | Gross 304B, Duke | Hu Fu | The Value of Information Concealment | info |

We consider a revenue-optimizing seller selling a single item to a buyer, on whose private value the seller has a noisy signal. We show that, when the signal is kept private, arbitrarily more revenue could potentially be extracted than if the signal is leaked or revealed. We then show that, if the seller is not allowed to make payments to the buyer, the gap between the two is bounded by a multiplicative factor of 3 when the value distribution conditioned on each signal is regular. We give examples showing that both conditions are necessary for a constant bound to hold.

We connect this scenario to multi-bidder single-item auctions where bidders' values are correlated. Similarly to the setting above, we show that the revenue of a Bayesian incentive compatible, ex post individually rational auction can be arbitrarily more than that of a dominant strategy incentive compatible auction, whereas the two are no more than a factor of 5 apart if the auctioneer never pays the bidders and if each bidder's value distribution conditioned on the others' is regular. The upper bounds in both settings degrade gracefully when the distribution is a mixture of a small number of regular distributions. This is joint work with Chris Liaw, Pinyan Lu, and Zhihao Gavin Tang.

2017-09-29 | 12:00-01:00 | Gross Hall 304B, Duke | Christian Kroer | Multiplicative Pacing Equilibria in Auction Markets | info |

Budgets play a significant role in real-world sequential auction markets such as those implemented by Internet companies. To maximize the value provided to auction participants, spending is smoothed across auctions so budgets are used for the best opportunities. This paper considers a smoothing procedure that relies on *pacing multipliers*: for each bidder, the platform applies a factor between 0 and 1 that uniformly scales the bids across all auctions. Looking at this process as a game between all bidders, we introduce the notion of *pacing equilibrium* and prove that one is always guaranteed to exist. We demonstrate through examples that a market can have multiple pacing equilibria with large variations in several natural objectives. We go on to show that computing either a social-welfare-maximizing or a revenue-maximizing pacing equilibrium is NP-hard. Finally, we develop a mixed-integer program whose feasible solutions coincide with pacing equilibria, and show that it can be used to find equilibria that optimize several interesting objectives. Using our mixed-integer program, we perform numerical simulations on synthetic auction markets that provide evidence that, in spite of the possibility of equilibrium multiplicity, multiplicity occurs very rarely across several families of random instances. We also show that solutions from the mixed-integer program can be used to improve the outcomes achieved in a more realistic adaptive pacing setting.
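The pacing idea above can be sketched in a few lines. This is a purely illustrative toy (a crude decay rule on the multipliers in repeated second-price auctions), not the paper's mixed-integer program or its equilibrium notion; the values and budgets are made up.

```python
# Toy simulation of multiplicative pacing (illustrative only).
# Each bidder bids alpha_i * v_ij in every auction; we repeatedly lower
# alpha_i for any bidder whose second-price payments exceed their budget.

values = {            # values[bidder][auction] -- hypothetical data
    "a": [10, 4, 2],
    "b": [6, 8, 1],
}
budgets = {"a": 5.0, "b": 5.0}
alpha = {i: 1.0 for i in values}     # pacing multipliers, start unpaced

def run_auctions(alpha):
    """Second-price auctions with paced bids; returns spend per bidder."""
    spend = {i: 0.0 for i in values}
    n_auctions = len(next(iter(values.values())))
    for j in range(n_auctions):
        bids = sorted(((alpha[i] * values[i][j], i) for i in values), reverse=True)
        (top, winner), (second, _) = bids[0], bids[1]
        if top > 0:
            spend[winner] += second   # winner pays the second-highest paced bid
    return spend

for _ in range(200):                  # crude tatonnement on the multipliers
    spend = run_auctions(alpha)
    for i in values:
        if spend[i] > budgets[i]:
            alpha[i] *= 0.99          # over budget: pace down
```

The decay rule need not settle at an exact equilibrium; it only illustrates why the multipliers interact as a game: pacing one bidder down changes what everyone else pays.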

2017-04-14 | 12:00-01:00 | Gross 318, Duke | Mathijs de Weerdt | Mechanism design and computational challenges in the smart grid | info |

Renewable energy sources are not as predictable and controllable as carbon-based generators. Therefore, the future electricity system requires flexible demand and more communication. This brings a collection of novel and challenging mechanism design and computational questions. In this talk, I will survey these challenges (e.g. from [3,4]) and elaborate on two of our contributions: an online scheduling mechanism under uncertain supply and demand using consensus [5], and a cooperative algorithm for congestion management under uncertainty [1,2].

2017-03-24 | 12:00-01:00 | Gross 318, Duke | Yu Cheng | Computational Aspects of Optimal Information Revelation | info |

Understanding the role of information in strategic interactions is becoming increasingly important in today's age of information. The design of information structures is emerging as a new area of mechanism design for information, an area that is still largely unexplored. In this talk, we will examine the optimization problem faced by an informed principal who must choose how to reveal partial information in order to induce a desirable equilibrium, a task often referred to as signaling or persuasion. We settle the computational complexity of optimal signaling in two of the most fundamental game-theoretic scenarios: Bayesian zero-sum games and network routing games.

2017-03-03 | 12:00-01:00 | North Bldg. 311, Duke | Simina Branzei | Computational Fair Division and Mechanism Design | info |

The emergence of online platforms has brought about a fundamental shift in economic thinking: the design of economic systems is now a problem that computer science can tackle. For the first time we are able to move from the study of economic systems as mainly natural systems to carefully designing and executing them on a computer. Prominent examples of digital market mechanisms include auctions for ads (run by companies such as Google) and electromagnetic spectrum (used by the US government). I will discuss several recent developments in fair division and mechanism design. I will start with a dictatorship theorem for fair division (cake cutting), showing that requiring truthfulness gives rise to a dictator. Next, I will turn to the paradigmatic model of multi-unit markets, where I will show that truthfulness requires discarding resources, but this problem can be alleviated when the market is even mildly competitive. More generally, I will discuss the theme of simplicity and complexity in mechanism design, and the interplay between economics and computation and learning.

2017-02-24 | 12:00-01:00 | Gross 318, Duke | Jacob Leshno (Columbia) | The Simple Structure of Top Trading Cycles in School Choice: A Continuum Model (Joint with Irene Lo) | info |

Many cities determine the assignment of students to schools through a school choice mechanism which calculates an assignment based on student preferences and school priorities. The prominent Top Trading Cycles (TTC) algorithm is strategy-proof and Pareto efficient, but the combinatorial description of TTC makes it non-transparent to parents and difficult to analyze for designers. We give a tractable characterization of the TTC mechanism for school choice: the TTC assignment can be simply described by n^2 cutoffs, where n is the number of schools. We define TTC in a continuum model, in which these thresholds can be directly calculated by a differential equation. The model also allows us to compute comparative statics, analyze sequential trade procedures, and show that changes in the priority structure can have non-trivial indirect effects on the allocation. We also apply this model to solve for optimal investment in school quality, and help explain empirical findings about the relation between the TTC assignment and the Deferred Acceptance assignment. To validate the continuum model we show that it gives a good approximation for strongly converging economies. Our analysis draws on an interesting connection between continuous trading procedures and continuous time Markov chains.
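For readers unfamiliar with the discrete mechanism the continuum model characterizes, here is a minimal TTC implementation for school choice. The example data are hypothetical, and the sketch assumes complete preference/priority lists and distinct student and school names.

```python
# A compact Top Trading Cycles (TTC) implementation for school choice.
# Students point to their favorite remaining school; schools point to their
# highest-priority remaining student; cycles trade and leave the market.

def ttc(student_prefs, school_priority, capacity):
    assignment = {}
    capacity = dict(capacity)
    active_students = set(student_prefs)
    active_schools = {c for c in capacity if capacity[c] > 0}

    while active_students and active_schools:
        s_pts = {s: next(c for c in student_prefs[s] if c in active_schools)
                 for s in active_students}
        c_pts = {c: next(s for s in school_priority[c] if s in active_students)
                 for c in active_schools}
        # Follow pointers from any student until a node repeats: that's a cycle.
        node, seen = next(iter(active_students)), []
        while node not in seen:
            seen.append(node)
            node = s_pts[node] if node in s_pts else c_pts[node]
        cycle = seen[seen.index(node):]
        # Every student in the cycle gets the school they point to.
        for x in cycle:
            if x in s_pts:
                school = s_pts[x]
                assignment[x] = school
                active_students.discard(x)
                capacity[school] -= 1
                if capacity[school] == 0:
                    active_schools.discard(school)
    return assignment

assignment = ttc(
    {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]},
    {"A": ["s2", "s1", "s3"], "B": ["s1", "s3", "s2"]},
    {"A": 1, "B": 2},
)
```

In this instance s2 trades into A on priority, after which s1 and s3 fill B; the paper's contribution is showing that in the continuum this whole cascade collapses to n^2 cutoff values.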

2016-12-02 | 12:00-01:00 | Gross 318, Duke | John P. Dickerson | Better Matching Markets via Optimization | info |

The exchange of indivisible goods without money addresses a variety of constrained economic settings where a medium of exchange, such as money, is considered inappropriate. Participants are either matched directly with another participant or, in more complex domains, in barter cycles and chains with other participants before exchanging their endowed goods. We show that techniques from computer science and operations research, combined with the recent availability of massive data and inexpensive computing, can guide the design of such matching markets and enable them to run in the real world. A key application domain for our work is kidney exchange, an organized market where patients with end-stage renal failure swap willing but incompatible donors. We present new models that address three fundamental dimensions of kidney exchange: (i) uncertainty over the existence of possible trades, (ii) balancing efficiency and fairness, and (iii) inherent dynamism. For each dimension, we design scalable branch-and-price-based integer programming market clearing methods. Next, we combine these dimensions, along with high-level human-provided guidance, into a unified framework for learning to match in a general dynamic setting. This framework, which we coin FutureMatch, takes as input a high-level objective (e.g., "maximize graft survival of transplants over time") decided on by experts, then automatically learns based on data how to make this objective concrete and learns the "means" to accomplish this goal, a task that, in our experience, humans handle poorly.
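A toy version of the clearing problem at the heart of kidney exchange: choose disjoint exchange cycles of length at most 3 so as to match as many patient–donor pairs as possible. The fielded solvers mentioned in the talk use branch-and-price integer programming; this brute force only works for tiny, made-up pools.

```python
from itertools import permutations

# Toy kidney-exchange clearing: pick disjoint directed cycles of length <= 3
# in the compatibility graph, maximizing the number of matched pairs.

def clear(compat, max_len=3):
    """compat[i] = set of pairs whose patient can receive pair i's donor kidney."""
    pairs = list(compat)
    cycles = []
    for L in (2, 3):
        if L > max_len:
            continue
        for cyc in permutations(pairs, L):
            if min(cyc) != cyc[0]:   # canonical rotation, avoid duplicates
                continue
            if all(cyc[(k + 1) % L] in compat[cyc[k]] for k in range(L)):
                cycles.append(cyc)
    best = []
    def search(i, used, chosen):     # exhaustive search over disjoint cycle sets
        nonlocal best
        if sum(len(c) for c in chosen) > sum(len(c) for c in best):
            best = list(chosen)
        for j in range(i, len(cycles)):
            if not used & set(cycles[j]):
                search(j + 1, used | set(cycles[j]), chosen + [cycles[j]])
    search(0, set(), [])
    return best

compat = {1: {2}, 2: {1, 3}, 3: {1}}   # hypothetical 3-pair pool
matching = clear(compat)
```

Here the 3-cycle beats the 2-cycle because it matches all three pairs; real instances add weights, failure probabilities, and fairness terms, which is where the integer-programming machinery earns its keep.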

2016-10-28 | 12:00-01:00 | D344 LSRC, Duke | Balaji S. Srinivasan | How Bitcoin enables a Machine-Payable Web | info |

First, we had the World Wide Web, a web of links between documents. Then we had the Social Web, a social network of relationships between people. We believe the third web will be the Machine-Payable Web, where each node in the network is a machine and each edge is a micropayment between machines. Towards this end, we've developed open source software called 21 that makes it simple to perform micropayments over HTTP. The software allows you to get digital currency onto any machine headlessly, set up web services that accept and transmit bitcoin over HTTP, and discover other machines with similar services to autonomously trade with. The overall effect is to turn digital currency into the ultimate scarce system resource, on par with CPU, RAM, and hard drive space. Just as one can create a database index that consumes disk space to save time, we show that one can now write code that instead spends digital currency to outsource a computation to save time. To illustrate the breadth of the implications, we conclude with several working examples: bitcoin-aware intelligent agents, APIs that implement autonomous surge pricing, and the development of a native market data structure as a first-class alternative to the well-known queue. We ask that audience members bring their laptops to code along with the speaker!

2016-08-26 | 01:30-02:30 | Gross 318, Duke | Neeldhara Misra | Elicitation for Preferences Single Peaked on Trees | info |

This talk will focus on the problem of preference elicitation, where the goal is to understand the preferences of agents (which we model as total orders) by querying them about their pairwise preferences. We will survey known results, which have studied the problem both on general domains and on structured ones, such as the domain of single-peaked preferences. As one might expect, structured domains admit a lower query complexity. We will consider domains that are single-peaked on trees, which generalize the notion of single-peakedness. We will see that the query complexity is influenced by the number of leaves, the path cover number, and the distance from path of the underlying single-peaked tree, whereas other natural parameters like maximum degree, diameter, and pathwidth do not play any direct role in determining query complexity. We will also consider the query complexity of finding a weak Condorcet winner for this domain.
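As a concrete instance on the simplest tree (a path), here is a sketch of elicitation for preferences single-peaked on a line: binary-search for the peak using comparisons of adjacent candidates, then merge the two monotone slopes. The comparison oracle and candidate axis are illustrative assumptions, not the talk's general tree algorithm.

```python
# Elicitation sketch for preferences single-peaked on a path (a tree with
# two leaves): O(log n) queries locate the peak, then O(n) queries merge
# the two slopes, which are sorted by construction of single-peakedness.

def elicit(axis, prefers):
    """axis: candidates in path order; prefers(a, b) -> True iff a ranked above b."""
    lo, hi = 0, len(axis) - 1
    while lo < hi:                                 # binary search for the peak
        mid = (lo + hi) // 2
        if prefers(axis[mid], axis[mid + 1]):
            hi = mid
        else:
            lo = mid + 1
    left = axis[lo - 1::-1] if lo > 0 else []      # slope left of peak, best first
    right = axis[lo + 1:]                          # slope right of peak, best first
    order, i, j = [axis[lo]], 0, 0
    while i < len(left) and j < len(right):        # merge the two sorted slopes
        if prefers(left[i], right[j]):
            order.append(left[i]); i += 1
        else:
            order.append(right[j]); j += 1
    return order + left[i:] + right[j:]
```

For example, with candidates 1..5 on the axis and an agent whose ideal point is 3.6, the elicited order is [4, 3, 5, 2, 1] using far fewer than the n(n-1)/2 comparisons a general total order can require.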

2016-06-01 | 12:00-01:00 | Gross Hall 304B, Duke | Emmanouil (Manolis) Pountourakis | Procrastination with Variable Present Bias | info |

Individuals working towards a goal often exhibit time-inconsistent behavior, making plans and then failing to follow through. One well-known model of such behavioral anomalies is present-bias discounting: individuals over-weight present costs by a bias factor. This model explains many time-inconsistent behaviors, but can make stark predictions in many settings: individuals either follow the most efficient plan for reaching their goal or procrastinate indefinitely. We propose a modification in which the present-bias parameter can vary over time, drawn independently each step from a fixed distribution. Following Kleinberg and Oren (2014), we use a weighted task graph to model task planning, and measure the cost of procrastination as the relative expected cost of the chosen path versus the optimal path. We use a novel connection to optimal pricing theory to describe the structure of the worst-case task graph for any present-bias distribution. We then leverage this structure to derive conditions on the bias distribution under which the worst-case ratio is exponential (in time) or constant. We also examine conditions on the task graph that lead to improved procrastination ratios: graphs with a uniformly bounded distance to the goal, and graphs in which the distance to the goal monotonically decreases on any path. Joint work with Nick Gravin, Nicole Immorlica, and Brendan Lucier.
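A minimal sketch of the planning model as I understand the Kleinberg–Oren formulation, with the talk's twist of a freshly drawn bias each step: the agent inflates only the immediate edge cost and evaluates continuations at their true optimal cost. The three-node task graph is made up.

```python
import random

# Present-biased walk on a task graph: at each node, draw a bias b and take
# the edge minimizing b * (edge cost) + (true optimal cost-to-go).

def dist_to_goal(graph, goal):
    """Cheapest true cost from every node to the goal (graph is a small DAG)."""
    d = {goal: 0.0}
    changed = True
    while changed:                        # Bellman-Ford style relaxation
        changed = False
        for v, edges in graph.items():
            best = min((c + d[w] for w, c in edges.items() if w in d), default=None)
            if best is not None and best < d.get(v, float("inf")):
                d[v] = best; changed = True
    return d

def walk(graph, start, goal, draw_bias, rng):
    d = dist_to_goal(graph, goal)
    v, cost = start, 0.0
    while v != goal:
        b = draw_bias(rng)                # fresh bias each step
        w = min(graph[v], key=lambda w: b * graph[v][w] + d[w])
        cost += graph[v][w]               # true (not perceived) cost accrues
        v = w
    return cost

graph = {"s": {"g": 4.0, "a": 1.0}, "a": {"g": 4.0}, "g": {}}
biased_cost = walk(graph, "s", "g", lambda rng: 3.0, random.Random(0))
```

With bias 1 the agent takes the direct edge (cost 4); with bias 3 the cheap-looking first step lures it onto the longer path (cost 5), which is exactly the procrastination ratio the talk analyzes, here with a random bias each step.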

2016-05-31 | 12:00-01:00 | Gross Hall 304B, Duke | Michael Albert | Mechanism Design with Correlated Distributions: Limited Correlation and Uncertain Distributions | info |

The standard assumption in the mechanism design literature is that the agents' valuations over the outcome are independently distributed. However, in many situations, it is natural to assume that agents' valuations are correlated. When valuations are correlated, many of the standard negative results in mechanism design do not apply. Notably, in a monopolistic auction with n bidders, the seller can extract the full social surplus as revenue, given certain assumptions about the correlation structure. However, existing results have two limitations: 1) they provide sufficient, but not necessary, conditions for full surplus extraction, and 2) they require perfect knowledge of the distribution over agents' valuations. In this talk, I will present research where we provide necessary and sufficient conditions for full surplus extraction under correlated valuations for the case of a monopolistic seller selling one good. Further, we extend the concepts of incentive compatibility and individual rationality to encompass uncertainty over the distribution of agents' valuations, and present an algorithm for designing these mechanisms that runs in time polynomial in the number of agent types. Finally, we demonstrate experimentally that these "robust" mechanisms can significantly outperform other mechanism design paradigms when the true distribution is estimated.
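The flavor of the classical sufficient condition alluded to above is the Crémer–McLean full-rank test: if each bidder's matrix of conditional beliefs about the others has full rank, the seller can add side bets that extract the full surplus. This is the standard textbook condition, not the talk's necessary-and-sufficient characterization; the belief matrices below are made up.

```python
from fractions import Fraction

# Crémer-McLean style rank test: full row rank of the conditional-belief
# matrix is sufficient for full surplus extraction. Exact rank via
# Gaussian elimination over the rationals.

def rank(matrix):
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# beliefs[i][j] = Pr(other bidder has type j | my type is i)
correlated = [[Fraction(2, 3), Fraction(1, 3)],
              [Fraction(1, 3), Fraction(2, 3)]]
independent = [[Fraction(1, 2), Fraction(1, 2)],
               [Fraction(1, 2), Fraction(1, 2)]]

full_extraction_possible = rank(correlated) == len(correlated)
```

The independent-beliefs matrix has rank 1, so the side-bet construction fails there, matching the usual independent-private-values impossibility.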

2016-04-01 | 12:00-01:00 | 318 Gross Hall, Duke | Siddhartha Banerjee | Dynamic Pricing in Rideshare Platforms | info |

Ride-sharing platforms such as Lyft and Uber are among the fastest growing online marketplaces. A key feature of these platforms is the implementation of fine-grained, fast-timescale dynamic pricing, where prices can react to the instantaneous system state and across very small geographic areas. In this talk, we will explore the value of such fast-timescale dynamic pricing, using a queueing model for ride-share platforms which combines the stochastic dynamics of the platform's operations with strategic models of both passenger and driver behavior. Using this model, we will show that dynamic pricing is not better than the optimal static price in most settings. However, finding the optimal static price requires exact knowledge of system parameters; we will also show that dynamic pricing is much more robust to fluctuations in these parameters than static prices. Thus, our work suggests that dynamic pricing does not necessarily buy more than static pricing, but rather lets ride-share platforms realize the benefits of optimal static pricing with imperfect knowledge of system parameters.

2016-03-25 | 12:00-01:00 | 318 Gross Hall, Duke | Ruta Mehta | A Market Model for Scheduling, with Applications to Cloud Computing | info |

We present a market for allocating and scheduling resources to agents who have specified budgets and need to complete specific tasks. Our market model is motivated by applications in cloud computing. Two important aspects required in this market are: (1) agents need specific amounts of each resource to complete their tasks, and (2) agents would like to complete their tasks as soon as possible. In incorporating these aspects, we arrive at a model that deviates substantially from market models studied so far in economics and theoretical computer science. Indeed, all known techniques developed to compute equilibria in markets in the last decade and a half seem not to apply here. We give a polynomial time algorithm for computing an equilibrium using a new technique that is somewhat reminiscent of the ironing procedure used in the characterization of optimal auctions by Myerson. This is despite the fact that the set of equilibrium prices could be non-convex; in fact, it could have "holes". Additionally, we show that the market satisfies the first welfare theorem. We also show that our market mechanism is incentive compatible when the agents only care about their flow time. A small modification to the payments, keeping the allocation the same, makes the entire mechanism incentive compatible for the setting in which agents want to first minimize flow time and, subject to that, minimize their payments. This is complemented by showing impossibility of an incentive compatible mechanism in the quasi-linear model. Joint work with Nikhil R. Devanur, Jugal Garg, Vijay V. Vazirani, and Sadra Yazdanbod.

2015-09-09 | 12:00-01:00 | Gross Hall 318, Duke | Bo Waggoner | Low-Cost Learning via Active Data Procurement | info |

I'll talk about a problem of growing interest at the interface of theoretical machine learning and economics: purchasing data from strategic agents for a machine learning task such as classification or regression. The goal is to design a mechanism that purchases the data, subject to a budget constraint and incentive compatibility, and to prove machine learning guarantees (e.g. "regret" or "risk" bounds). I'll briefly overview literature in this area and present our model and results. We consider a tweak to the classic model of online "no regret" learning in which data points, arriving one by one, are each held by a strategic agent who must be compensated for providing it. A mechanism must post prices in order to purchase the data. Our mechanism interfaces with a broad class of learning algorithms and achieves low regret as a function of budget. These results also imply learning guarantees for the "statistical" setting in which data points are drawn from a distribution (rather than being adversarial/worst case). We'll cover some of the interesting technical ideas, but the talk will be accessible to all backgrounds (I promise!). The paper is available at http://arxiv.org/abs/1502.05774 and is joint work with Jake Abernethy, Yiling Chen, and CJ Ho.
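The classic multiplicative-weights (Hedge) learner is the kind of no-regret algorithm such a mechanism can wrap. This sketch runs it on a toy loss sequence with no purchasing or budget logic at all; the pricing layer is the paper's contribution and is not reproduced here.

```python
import math

# Hedge / multiplicative weights over n experts with losses in [0, 1]:
# play the weight distribution, then exponentially downweight lossy experts.

def hedge(losses, eta=0.5):
    """losses: list of per-round loss vectors, one entry per expert."""
    n = len(losses[0])
    weights = [1.0] * n
    total_loss = 0.0
    for round_losses in losses:
        probs = [w / sum(weights) for w in weights]
        total_loss += sum(p * l for p, l in zip(probs, round_losses))
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
    best_expert = min(sum(l[i] for l in losses) for i in range(n))
    return total_loss, best_expert       # regret = total_loss - best_expert

losses = [[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]] * 20   # made-up loss sequence
algo_loss, best = hedge(losses)
```

Over these 60 rounds the best expert loses 20; Hedge's cumulative loss stays within a small additive regret of that, which is the guarantee the data-procurement mechanism inherits as a function of its budget.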

2015-07-22 | 12:00-01:00 | Gross Hall 304B, Duke | Lance Fortnow | Bounding Rationality by Computational Complexity | info |

Traditional microeconomic theory treats individuals and institutions as completely understanding the consequences of their decisions given the information they have available. These assumptions may not be valid, as we might have to solve hard computational problems to optimize our choices. What happens if we restrict the computational power of economic agents? There has been some work in economics treating computation as a fixed cost or simply considering the size of a program. This talk will explore a new direction, bringing the rich tools of computational complexity into economic models, a tricky prospect where even basic concepts like "input size" are not well defined. We show how to incorporate computational complexity into a number of economic models, including game theory, prediction markets, forecast testing, preference revelation, and awareness. This talk will not assume any background in either economics or computational complexity.

2015-07-22 | 11:00-12:00 | Gross Hall 304B, Duke | Sam Ganzfried | Computing Strong Game-Theoretic Strategies in Large Games | info |

Important problems in nearly all disciplines and nearly all application domains involve multiple agents behaving strategically. Examples include deploying officers to protect ports, determining optimal thresholds to protect against phishing attacks, and finding robust policies for diabetes management. Such problems are modeled under the framework of game theory. In many important games there is information that is private to only some agents and not available to other agents -- for instance, in auctions each bidder may know his own valuation and only know the distribution from which other agents' valuations are drawn. My research designs, analyzes, and implements new approaches for agents in large imperfect-information games. There are several major challenges that must be confronted. First, standard solution concepts such as Nash equilibrium lack theoretical justification in certain classes (e.g., games with more than two players). Second, computing these concepts is difficult in certain classes from a complexity-theoretic perspective. Third, computing these concepts is difficult in practice for many interesting games even when they are well motivated and polynomial-time algorithms exist (e.g., two-player zero-sum (competitive) games), due to enormous state spaces. And fourth, for all game classes, it is not clear whether the goal should be to approximate a Nash equilibrium; one could achieve significantly higher payoff by learning to exploit opponents' mistakes. However, such exploitation must be done in a way that does not open oneself up to being exploited in turn by strong deceptive opponents. While the approaches I have developed are domain independent, much of my research has been motivated by and applied to the domain of poker. Poker has emerged as a major AI challenge problem.
Poker is not simply a toy game; it is tremendously popular among humans, and online poker is a multi-billion dollar industry. For the past ten years, there has been a competition between the strongest computer poker agents held annually at the top AI conference. The version of two-player no-limit Texas hold 'em played in the competition has approximately 10^165 states in its game tree. As part of my research, I created an agent for two-player no-limit Texas hold 'em that is currently the strongest agent in the world: it beat each opposing agent with statistical significance in the 2014 AAAI computer poker competition. An improved version called Claudico also competed against the strongest human specialists in 2015.
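The two-player zero-sum case mentioned above is the one where equilibrium computation works well in practice. Regret matching, the core update inside the CFR family of poker algorithms, fits in a few lines; this toy runs self-play on rock-paper-scissors rather than poker, where the average strategies approach the uniform equilibrium.

```python
import random

# Regret matching self-play on rock-paper-scissors: play actions with
# probability proportional to positive cumulative regret; the *average*
# strategy converges to a Nash equilibrium in zero-sum games.

PAYOFF = [[0, -1, 1],    # player 1's payoff matrix (rows vs columns)
          [1, 0, -1],
          [-1, 1, 0]]

def regret_matching(T=20000, seed=0):
    rng = random.Random(seed)
    regret = [[0.0] * 3, [0.0] * 3]
    avg = [[0.0] * 3, [0.0] * 3]
    for _ in range(T):
        strat = []
        for p in (0, 1):
            pos = [max(r, 0.0) for r in regret[p]]
            total = sum(pos)
            strat.append([x / total for x in pos] if total > 0 else [1/3] * 3)
            for a in range(3):
                avg[p][a] += strat[p][a]
        a0 = rng.choices(range(3), strat[0])[0]
        a1 = rng.choices(range(3), strat[1])[0]
        for a in range(3):   # counterfactual regret for each unplayed action
            regret[0][a] += PAYOFF[a][a1] - PAYOFF[a0][a1]
            regret[1][a] += -PAYOFF[a0][a] - (-PAYOFF[a0][a1])
    return [[x / T for x in avg[p]] for p in (0, 1)]

avg0, avg1 = regret_matching()
```

Poker agents apply this same update at every information set of an abstracted game tree; the scale, not the update rule, is where the hard engineering lives.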

2015-07-20 | 11:30-12:30 | Gross Hall 304B, Duke | Darrell Hoy | Price of Anarchy Bounds for Theoretical and Empirical Auctions | info |

Analyzing auctions can be challenging in asymmetric settings: for instance, Vickrey [1961] left the analytical characterization of equilibrium in the two-bidder first-price auction with uniform distributions as an open question; equilibrium remained uncharacterized until 2012. I will discuss a framework for analyzing welfare and revenue in auctions without needing to solve for equilibrium, via Price of Anarchy analysis. The talk will begin with a theoretical framework that reduces the problem of generating welfare and revenue bounds to analyzing one property of the rules of an auction: revenue covering. This approach gives a simple proof of welfare guarantees, and enables the first revenue approximation results for the first-price and all-pay auctions in asymmetric settings. We will then leverage this theoretical framework for empirical pursuits with a key simple observation: we can estimate the revenue covering of an auction from data. We apply this to advertising auctions on BingAds to obtain robust empirical welfare guarantees.

2015-07-17 | 12:00-01:00 | Gross Hall 304B, Duke | Nisarg Shah | Leximin Allocations in the Real World | info |

As part of a collaboration with a major California school district, we study the problem of fairly allocating unused classrooms in public schools to charter schools. Our approach revolves around the randomized leximin mechanism. Previous work showed that in a much simpler domain, the leximin mechanism is proportional, envy-free, efficient, and group strategyproof. We extend this result to a broad domain that not only subsumes our classroom allocation setting, but also a number of other settings studied in the literature. Consequently, our result generalizes various results from resource allocation, cake-cutting, and kidney exchange. We provide worst-case guarantees regarding the performance of the leximin mechanism with respect to two natural quantitative objectives. Our experiments, which are based on real data, show that the performance in practice is significantly better than the worst-case guarantees. Our nontrivial implementation of the leximin mechanism scales gracefully in terms of running time (even though the problem is intractable in theory). At the end of the talk, I will discuss issues related to the deployment of our approach. Joint work with David Kurokawa and Ariel Procaccia.
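To make the leximin objective concrete, here is a brute-force deterministic version on a made-up classroom-style instance: among all assignments of items to agents, pick the one whose sorted utility vector is lexicographically largest, worst-off agent first. The talk's mechanism is a randomized leximin over a far richer domain; this only illustrates the objective itself.

```python
from itertools import product

# Brute-force leximin: maximize the sorted (ascending) utility vector
# lexicographically over all ways of giving each item to one agent.

def leximin(items, agents, utility):
    def utilities(alloc):
        u = {a: 0 for a in agents}
        for item, a in zip(items, alloc):
            u[a] += utility[a][item]
        return sorted(u.values())        # worst-off first; compared lexically
    return max(product(agents, repeat=len(items)), key=utilities)

items = ["room1", "room2", "room3"]      # hypothetical classrooms
agents = ["X", "Y"]                      # hypothetical charter schools
utility = {"X": {"room1": 3, "room2": 1, "room3": 1},
           "Y": {"room1": 2, "room2": 2, "room3": 2}}
best = leximin(items, agents, utility)
```

Here leximin gives X its prized room1 and Y the other two, yielding utilities (3, 4); maximizing total utility alone could instead leave one school with nothing, which is exactly what the fairness properties rule out.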

2015-05-15 | 12:00-01:00 | 304B Gross Hall, Duke | Shaddin Dughmi | Algorithmic Bayesian Persuasion | info |

Persuasion is intrinsic to many human endeavors. For example, merchants persuade customers through advertising campaigns which selectively highlight a product's positive attributes and obscure its defects, lawyers persuade judges and juries through oral arguments which selectively highlight some pieces of evidence and downplay others, and political candidates persuade voters by framing the debate on the issues of the day in order to further their political aspirations. In all these cases, and many others, persuasion can be viewed as the act of exploiting an informational advantage in order to effect change in the decisions of others through communication. For such communication to be credible, and hence persuasive, it must be grounded in truth, and yet it may cherry-pick the truths revealed while obscuring others. The ubiquity of persuasive communication has spawned a vast literature concerned with understanding and exploring the space of information structures in strategic interactions. Perhaps no model is more basic and fundamental than the Bayesian persuasion model of Kamenica and Gentzkow. Here there are two players, a sender and a receiver. The receiver must take one of a number of actions with a priori unknown payoffs. The sender, on the other hand, has access to additional information regarding the payoffs of the various actions for both players. The sender can commit to revealing a noisy signal regarding the realization of the payoffs of the various actions, and in doing so would like to maximize her payoff in expectation, assuming that the receiver rationally acts to maximize his own payoff. This paper studies the optimization task facing the sender, that of designing a near-optimal signaling scheme, from an algorithmic perspective. Despite being the fundamental building block for the study of signaling in strategic settings, the Bayesian persuasion problem had not been examined algorithmically prior to our work.
A priori, it is unclear whether the sender's optimal signaling strategy can be succinctly represented, let alone efficiently computed. Somewhat surprisingly, we nevertheless discover that optimal or near-optimal signaling schemes can be computed efficiently --- in time polynomial in the number of actions --- under fairly general conditions. We consider two models: one in which action payoffs are i.i.d. from an explicitly described marginal distribution, and another in which the joint distribution of action payoffs is arbitrary and given by a black-box sampling oracle. We show two results for the case in which the payoffs of different actions are i.i.d. and given explicitly: a polynomial-time (exact) algorithmic solution, and a "simple" (1 − 1/e)-approximation algorithm. Both results hinge on a "symmetry characterization" of the optimal signaling scheme. The former result also involves a connection to auction theory, and in particular to Border's characterization of the space of feasible reduced-form auctions. We then consider general distributions given by a black-box sampling oracle. We show that, even in this very general setting, the persuasion problem admits a fully polynomial-time approximation scheme (FPTAS) in a bi-criteria sense. As our main technical tool to prove this theorem, we study the algorithmics of a natural and abstract question on vector spaces, which we term dispersion testing: given two distributions A and B supported on a finite-dimensional vector space, decide whether A is a mean-preserving spread of B, and if so compute the inverse of the associated spreading map. We employ tools from convex optimization and optimal transport theory to design an approximate test for the dispersion testing problem, and relate the persuasion problem to dispersion testing via a randomized Turing reduction employing the ellipsoid method. This is joint work with Haifeng Xu.
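The canonical two-state instance (Kamenica and Gentzkow's prosecutor example) shows what an optimal signaling scheme looks like: recommend the action always in the good state, and just often enough in the bad state that the posterior sits exactly at the receiver's action threshold. A sketch, assuming a 1/2 threshold and a prior below it:

```python
from fractions import Fraction

# Optimal binary Bayesian persuasion: the receiver acts iff the posterior
# probability of the good state is at least 1/2; the sender wants the action.

def optimal_persuasion(prior):
    """prior = Pr(good state), assumed strictly below the 1/2 threshold."""
    q = prior / (1 - prior)          # Pr(recommend action | bad state)
    prob_action = prior + (1 - prior) * q
    posterior = prior / prob_action  # posterior after a recommendation
    return q, prob_action, posterior

q, prob_action, posterior = optimal_persuasion(Fraction(3, 10))
```

With prior 3/10, the action is taken with probability 3/5, i.e. 2 * prior: double what truthful disclosure would achieve, while the posterior after a recommendation sits exactly at 1/2. The paper's algorithms handle the many-action, correlated-payoff generalizations of this picture.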

2015-04-24 | 12:00-01:00 | 304B Gross Hall, Duke | Mallesh Pai | Do Online Social Networks Increase Welfare? | info |

We consider a strategic online social network that controls information flows between agents in a social learning setting. Agents on the network select among products of competing firms of unknown quality. The network sells advertising to firms. We consider display advertising, which is standard firm-to-consumer advertising, and social advertising, in which agents who purchased that firm's product are highlighted to their friends. We show that in equilibrium, information is unbiased relative to a setting with no advertising. However, the network reduces the information agents see about others' purchases, since this increases advertising revenue. Hence consumer welfare is lower than in the first best.

2015-04-17 | 12:00-01:00 | 304B Gross Hall, Duke | Jimmy Yang | Optimization in Online Advertising | info |

Optimization in Online Advertising As a fast-growing business, online advertising has become the largest media channel since 2013. With the benefits of flexible, dynamic formats, the ability to track and measure user response, global coverage, and real-time delivery, it creates increasingly higher value for advertisers and publishers. Online advertising is an ecosystem composed of users, publishers and advertisers, who closely interact with each other. Each party has its own goals in participating in the ecosystem, and optimization plays a critical role in reaching those goals. In this talk, we will first briefly review the characteristics and trends of online advertising. Then we will present some of the important optimization problems, from the perspectives of the publishers and advertisers respectively. In general, the publishers are interested in maximizing their revenue given an available inventory of supply, while the advertisers are interested in maximizing their ROI (Return On Investment) given an available budget. These problems are modeled in different ways and solved to provide both online and offline solutions that guide business practice. The talk will conclude with a summary and a discussion of potential research opportunities. | |||||

2015-04-10 | 12:00-01:00 | 304B Gross Hall, Duke | Nikhil Devanur | Simple Auctions with Simple Strategies | info |

Simple Auctions with Simple Strategies We introduce single-bid auctions as a new format for combinatorial auctions. In single-bid auctions, each bidder submits a single real-valued bid for the right to buy items at a fixed price. Contrary to other simple auction formats, such as simultaneous or sequential single-item auctions, bidders can implement no-regret learning strategies for single-bid auctions in polynomial time. Price of anarchy bounds for correlated equilibria concepts in single-bid auctions therefore have more bite than their counterparts for auctions and equilibria for which learning is not known to be computationally tractable (or worse, known to be computationally intractable). To this end, we show that for any subadditive valuations the social welfare at equilibrium is an O(log m)-approximation to the optimal social welfare, where m is the number of items. We also provide tighter approximation results for several subclasses. Our welfare guarantees hold for Nash equilibria and no-regret learning outcomes in both Bayesian and complete information settings. Joint work with Jamie Morgenstern, Vasilis Syrgkanis and Matt Weinberg. | |||||
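The single-bid format itself is easy to state. A minimal sketch for additive bidders, assuming the highest-bid-first processing order in which each bidder buys every remaining item she values above her own bid, at a per-item price equal to that bid (tie-breaking and other details are simplifications):

```python
def single_bid_auction(bids, values):
    """Sketch of a single-bid auction for additive bidders.

    bids[i]   : bidder i's single real-valued bid (her per-item price)
    values[i] : dict mapping item -> bidder i's value for that item

    Bidders are processed in decreasing order of bid; each buys every
    remaining item she values strictly above her bid."""
    remaining = set().union(*(v.keys() for v in values))
    allocation, payments = {}, [0.0] * len(bids)
    for i in sorted(range(len(bids)), key=lambda i: -bids[i]):
        bought = {j for j in remaining if values[i].get(j, 0.0) > bids[i]}
        remaining -= bought
        allocation[i] = bought
        payments[i] = bids[i] * len(bought)
    return allocation, payments

# Two bidders, two items: bidder 0 outbids and takes "a"; "b" is left
# for bidder 1 at her (lower) bid.
alloc, pay = single_bid_auction(
    bids=[3.0, 2.0],
    values=[{"a": 5.0, "b": 1.0}, {"a": 4.0, "b": 4.0}])
print(alloc, pay)  # {0: {'a'}, 1: {'b'}}, [3.0, 2.0]
```

Note the bid is one-dimensional no matter how many items there are, which is what makes no-regret learning over bids tractable.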

2015-04-03 | 12:00-01:00 | 304B Gross Hall, Duke | Jens Witkowski | Robust Peer Prediction Mechanisms | info |

Robust Peer Prediction Mechanisms Peer prediction mechanisms are payment rules that seek to elicit private information from rational agents without the requirement that ground truth is eventually revealed. For example, a public health program may require participants to report whether they have ever used illegal drugs, and participants may lie about their drug abuse because of shame or concerns about not being eligible for the program. Whether or not participants have in fact used illegal drugs is private information that is never revealed, and the participants' reports thus cannot be verified. The classical peer prediction method [Miller et al. 2005] provides an approach to these settings. It does, however, critically rely on the assumption that all participants share the same prior beliefs, and that the mechanism knows these beliefs. We relax these common knowledge assumptions. I will first present our Robust Bayesian Truth Serum (RBTS), which asks participants for two types of reports: a report of the private information the mechanism is interested in, and a prediction report corresponding to a participant's belief about the private information of other participants. RBTS dispenses with the assumption that the participants' prior beliefs need to be known to the mechanism. RBTS does, however, still rely on the participants sharing the same prior beliefs. Our second contribution is the design of subjective-prior peer prediction mechanisms, which further reduce the assumption of common knowledge. As in RBTS, they do not require knowledge of the participants' prior beliefs. Moreover, they allow the participants' prior beliefs to be subjective, i.e., different from one another. | |||||

2015-03-20 | 12:00-01:00 | 304B Gross Hall, Duke | R. Ravi | Expertise in Online Markets | info |

Expertise in Online Markets We examine the effect of the presence of knowledgeable buyers (experts) in online markets where auctions with a hard close and posted prices are widely used. We model buyer expertise as the ability to accurately predict the quality, or condition, of an item. In auctions with a hard close, sniping--submitting bids in the last minute--emerges as an equilibrium strategy for experts. We show that non-experts bid more aggressively as the proportion of experts increases. As a consequence, we establish that the auction platform may obtain a higher revenue by (i) enforcing a hard close and allowing sniping, and (ii) withholding information regarding the quality of the item. Moreover, in online markets where both auctions and posted prices are available, we show that the presence of experts allows the sellers of high quality items to signal their quality by choosing to sell via auctions. Joint ongoing work with Stelios Despotakis (CMU), Isa Hafalir (CMU) and Amin Sayedi (U Washington). A copy of the working paper is available at http://www.contrib.andrew.cmu.edu/~ravi/experts.pdf | |||||

2015-03-06 | 12:00-01:00 | 304B Gross Hall, Duke | Ashish Goel | Crowdsourced Democracy: Algorithms, Mechanisms, and Platforms | info |

Crowdsourced Democracy: Algorithms, Mechanisms, and Platforms YouTube competes with Hollywood as an entertainment channel, and also supplements Hollywood by acting as a distribution mechanism. Twitter has a similar relationship to news media, and Coursera to universities. But Washington has no such counterpart; there are no online alternatives for making democratic decisions at large scale as a society. As opposed to building consensus and compromise, public discussion boards often devolve into flame wars when dealing with contentious socio-political issues. In this talk, we will describe some of the challenges in and approaches for solving this broad problem. We will first describe an opinion formation process that leads to polarization. We will then describe our work on participatory budgeting and knapsack voting. All budget problems are knapsack problems at their heart, since the goal is to pack as much \"societal value\" as possible into a \"spending capacity\". We will describe our experience with implementing knapsack voting in Chicago's 49th ward and Vallejo, and show protocols that are weakly strategy-proof and amenable to efficient aggregation. We will also show that comparison voting can elicit an approximate Condorcet winner efficiently when one exists, and describe our experience with comparison-based voting in an experiment in Finland. We will finally describe a mechanism called Triadic consensus: here, we divide individuals into small groups (say groups of three) and ask them to come to consensus; the results of the triadic deliberations in each round form the input to the next round. We show that this method is efficient and strategy-proof in fairly general settings, whereas no pair-wise deliberation process can have the same properties. This is joint work with Tanja Aitamurto, Pranav Dandekar, Anilesh Krishnaswamy, Helene Landemore, David Lee, and Sukolsak Sakshuwong. | |||||
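The knapsack view of budgeting can be made concrete: given project costs and aggregated approval counts, the aggregation step is a 0/1 knapsack. A hedged sketch (summing approvals as the objective is an illustration, not necessarily the talk's exact aggregation rule):

```python
def budget_knapsack(costs, votes, budget):
    """0/1 knapsack: choose projects maximizing total votes within budget.
    Classic dynamic program over integer budget units."""
    best = [0] * (budget + 1)                 # best[b] = max votes with budget b
    choice = [[False] * len(costs) for _ in range(budget + 1)]
    for i, (c, v) in enumerate(zip(costs, votes)):
        for b in range(budget, c - 1, -1):    # descending: each project used once
            if best[b - c] + v > best[b]:
                best[b] = best[b - c] + v
                choice[b] = choice[b - c][:]
                choice[b][i] = True
    return best[budget], [i for i, used in enumerate(choice[budget]) if used]

# Three projects with costs 4, 3, 2 and approval counts 10, 7, 6; budget 5.
total, chosen = budget_knapsack(costs=[4, 3, 2], votes=[10, 7, 6], budget=5)
print(total, chosen)  # 13 [1, 2]: projects 1 and 2 (cost 3 + 2) beat project 0 alone
```
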

2015-02-27 | 12:00-01:00 | 304B Gross Hall, Duke | David Pennock | Designing Wagering Mechanisms | info |

Designing Wagering Mechanisms In one online contest, the average prediction of 2231 people beat all but 0.3% of individual players. This is a striking, yet typical, example of the wisdom of crowds. I will discuss the design of wagering mechanisms to elicit probabilities from as many experts as possible for as little cost as possible, to maximize accuracy. Dozens of mechanisms have been proposed, from the parimutuel betting system invented in 1867 to our latest No-Arbitrage Wagering Mechanism (2014). Taking an axiomatic approach, I discuss how NAWM is able to maintain many of the most desirable properties for wagering mechanisms, including incentive compatibility, individual rationality, sybilproofness, and weak budget balance, while in many settings actually earning a profit for the market institution. If I have time, I will talk about the Microsoft Prediction Lab, our effort to crowdsource literally billions or more combinatorial predictions. | |||||
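The opening wisdom-of-crowds statistic is easy to reproduce in a toy model where each player's prediction is the truth plus independent noise (all numbers below are illustrative, not the contest's data):

```python
import random

def crowd_vs_individuals(truth=0.7, n_players=2231, noise=0.2, seed=0):
    """Toy wisdom-of-crowds check: each player reports the truth plus
    independent Gaussian noise; compare the average prediction's error
    with each individual's error."""
    rng = random.Random(seed)
    preds = [truth + rng.gauss(0, noise) for _ in range(n_players)]
    avg = sum(preds) / n_players
    crowd_err = abs(avg - truth)
    beaten = sum(abs(p - truth) > crowd_err for p in preds)
    return crowd_err, beaten / n_players      # crowd error, fraction it beats

err, frac = crowd_vs_individuals()
print(err, frac)  # the average beats the vast majority of individual players
```

Independent errors cancel in the average, so the crowd's error shrinks roughly like noise divided by the square root of the number of players.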

2015-02-13 | 12:00-01:00 | 304B Gross Hall, Duke | Francesca Rossi | Restricted manipulation in iterative voting: convergence, Condorcet efficiency, and Borda score | info |

Restricted manipulation in iterative voting: convergence, Condorcet efficiency, and Borda score When a group of agents needs to take a collective decision, e.g., to coordinate their action, a voting rule can be used to aggregate their individual preferences into a collective choice. In such situations, agents can act strategically and manipulate the election by misrepresenting their preferences to change the outcome in their favor. While manipulation is traditionally considered a bad behavior to be avoided, in this paper we show that this form of strategic reasoning can yield beneficial results in the setting of iterative voting. We study sequences of elections where at each step an individual is allowed to manipulate by modifying his ballot according to a set of strategic moves. We prove convergence of this process for several combinations of voting rules and strategic moves, and we present experiments that show that the winner after iteration is often a better candidate than the initial one. For all the cases examined, we also analyze the computational and axiomatic properties of the iterative process considered as a one-stage voting rule. | |||||
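The iterative process is easy to sketch for plurality. A minimal toy version with truthful initial ballots, one improving manipulation applied per step, and alphabetical tie-breaking (the tie-breaking and move-selection rules are my assumptions, not necessarily those studied in the paper):

```python
def iterative_plurality(prefs, max_steps=1000):
    """prefs[i] is voter i's sincere ranking, best first. Ballots start
    truthful (everyone votes for her top choice). In each step one voter
    may switch her vote if doing so changes the plurality winner to a
    candidate she strictly prefers; iterate until no such move exists."""
    candidates = sorted(prefs[0])
    ballots = [p[0] for p in prefs]

    def winner():
        tally = {c: sum(b == c for b in ballots) for c in candidates}
        return max(candidates, key=lambda c: tally[c])  # ties: alphabetical

    for _ in range(max_steps):
        w, move = winner(), None
        for i, p in enumerate(prefs):
            for c in candidates:
                if c == ballots[i]:
                    continue
                old, ballots[i] = ballots[i], c       # try the manipulation
                better = p.index(winner()) < p.index(w)
                ballots[i] = old                      # undo the trial
                if better:
                    move = (i, c)
                    break
            if move:
                break
        if move is None:
            return winner()                           # converged
        ballots[move[0]] = move[1]
    return winner()

# Voter 1 abandons her top choice to flip the winner from a to c.
prefs = [["a", "b", "c"], ["b", "c", "a"], ["c", "b", "a"]]
print(iterative_plurality(prefs))  # c
```
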

2015-02-06 | 12:00-01:00 | 304B Gross Hall, Duke | Thanh Nguyen | Near Feasible Stable Matchings with Complementarities | info |

Near Feasible Stable Matchings with Complementarities The National Resident Matching program strives for a stable matching of medical students to teaching hospitals. In the presence of couples, stable matchings need not exist. For any student preferences, we show that each instance of a stable matching problem has a `nearby' instance with a stable matching. The nearby instance is obtained by perturbing the capacities of the hospitals. Our approach is general and applies to other types of complementarities, as well as matchings with side constraints and contracts. Joint work with Rakesh Vohra. | |||||

2014-12-12 | 12:00-01:00 | Gross Hall 304B, Duke | Bob Day | Combinatorial Auctions, Duality, and the Core | info |

Combinatorial Auctions, Duality, and the Core Combinatorial auctions allow bidders to bid on packages or combinations of items, providing a more expressive notion of bidding, and protecting bidders from certain exposure problems. The \"winner-determination problem\" faced by the bid-taker is typically NP-hard, and with several independent strategic agents, the design of combinatorial auctions has provided a rich environment for technical research in recent years. Combinatorial auctions and their closely related generalization, combinatorial exchanges (which have several buyers and several sellers), have design limitations based on a few well-known impossibility theorems. This means that most practical applications of these concepts involve a fundamental compromise, and I will discuss in detail one such practical compromise, the \"core-selecting\" combinatorial auction, which has been used in several recent spectrum auctions in Europe, Australia, and Canada. Future directions in this area involve the generation of meaningful dual prices to the winner-determination integer program, and I will discuss some of the relationships among auctions, duality, and the core. | |||||

2014-11-21 | 12:00-01:00 | 304-B Gross Hall, Duke | Yaron Singer | Influence Maximization through Adaptive Seeding | info |

Influence Maximization through Adaptive Seeding Throughout the past decade there has been extensive research on the problem of influence maximization in social networks: how does one target a small subset of individuals who will trigger a large word-of-mouth cascade in the network? In this talk we will introduce a new paradigm for influence maximization, called adaptive seeding. The framework is designed to dramatically improve influence maximization by leveraging a remarkable structural phenomenon in social networks known as the \"friendship paradox\" (or \"your friends have more friends than you\"), and presents a novel optimization model. The optimization model may be of independent interest to those curious about stochastic optimization, submodular maximization, and machine learning. We will discuss key algorithmic ideas and some experimental results of adaptive seeding. | |||||

2014-11-14 | 11:45-04:00 | Gross Hall 330, Duke | | CS-ECON mini-retreat | info |

CS-ECON mini-retreat | |||||

2014-11-07 | 12:00-01:00 | 304-B Gross Hall, Duke | Matt Weinberg | Simple mechanisms for complex markets | info |

Simple mechanisms for complex markets We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [HR12] and menus of infinite size [DDT13]. Hart and Nisan [HN12] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [HN12] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a 6-approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items. Joint work with Moshe Babaioff, Nicole Immorlica and Brendan Lucier. | |||||
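The two simple schemes are straightforward to evaluate by Monte Carlo for a given additive buyer. A sketch (the uniform distributions and the posted prices are illustrative; the paper's guarantee concerns the best item prices and the best bundle price):

```python
import random

def revenue_item_pricing(samplers, prices, n=20000, seed=0):
    """Expected revenue when item j is priced at prices[j] and an additive
    buyer purchases every item valued at or above its price (Monte Carlo)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += sum(p for s, p in zip(samplers, prices) if s(rng) >= p)
    return total / n

def revenue_bundle_pricing(samplers, price, n=20000, seed=0):
    """Expected revenue when the grand bundle is sold at a single price."""
    rng = random.Random(seed)
    hits = sum(sum(s(rng) for s in samplers) >= price for _ in range(n))
    return price * hits / n

# Two i.i.d. Uniform[0,1] items: the monopoly price per item is 1/2,
# giving item revenue 0.5; bundling at 0.8 sells with probability 0.68.
u = lambda rng: rng.random()
item_rev = revenue_item_pricing([u, u], [0.5, 0.5])
bundle_rev = revenue_bundle_pricing([u, u], 0.8)
print(item_rev, bundle_rev)  # roughly 0.5 vs roughly 0.544
```
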

2014-10-24 | 12:00-01:00 | Gross Hall 304B, Duke | Ariel Procaccia | Computational Fair Division: From Cake Cutting to Cloud Computing | info |

Computational Fair Division: From Cake Cutting to Cloud Computing I will present an exciting new interaction between computer science and fair division theory. On the one hand, I will show how computational thinking provides a novel perspective on classic problems in fair division, such as cake cutting and estate division. On the other hand, I will explain how fair division theory is informing the allocation of computational resources. I will also talk about the integration of some of these theoretical ideas into Spliddit, a not-for-profit fair division website that aims to make the world just a bit fairer. | |||||

2014-10-10 | 12:00-01:00 | 304-B Gross Hall, Duke | Yash Kanoria | Unbalanced random matching markets: the stark effect of competition | info |

Unbalanced random matching markets: the stark effect of competition We study bilateral matching markets such as marriage markets, labor markets, and school/college admissions, that allow participants to form partnerships with each other for mutual benefit. We study competition in matching markets with random heterogeneous preferences. First, we find that such markets are extremely competitive. In unbalanced markets, each agent on the short side of the market is matched to one of his top preferences and each agent on the long side does almost no better than being matched to a random partner. Second, we find that even the slightest imbalance leads to competition that yields an essentially unique stable matching. Our results suggest that any matching market is likely to have a small core, explaining why empirically small cores are ubiquitous. We obtain broadly similar results in random assignment markets (matching markets with transfers). Joint work with Itai Ashlagi and Jacob Leshno; and Daniela Saban and Jay Sethuraman. Paper links: | |||||
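The stark effect is visible in simulation: run proposer-side deferred acceptance in a market with one extra agent on the proposing side and compare average partner ranks. A sketch (market size, seed, and which side proposes are arbitrary choices):

```python
import random

def deferred_acceptance(prop_prefs, recv_prefs):
    """Proposer-proposing Gale-Shapley deferred acceptance.
    prop_prefs[i]: proposer i's ranking of receivers (best first).
    recv_prefs[j]: receiver j's ranking of proposers (best first).
    Returns a dict mapping receiver -> matched proposer."""
    rank = [{i: r for r, i in enumerate(p)} for p in recv_prefs]
    next_choice = [0] * len(prop_prefs)
    engaged_to = {}
    free = list(range(len(prop_prefs)))
    while free:
        i = free.pop()
        if next_choice[i] >= len(prop_prefs[i]):
            continue                        # proposer i exhausted her list
        j = prop_prefs[i][next_choice[i]]
        next_choice[i] += 1
        cur = engaged_to.get(j)
        if cur is None:
            engaged_to[j] = i
        elif rank[j][i] < rank[j][cur]:     # receiver j prefers the newcomer
            engaged_to[j] = i
            free.append(cur)
        else:
            free.append(i)
    return engaged_to

# 101 proposers vs 100 receivers: the long side proposes, yet the short
# side should end up with far better matches.
rng = random.Random(1)
n = 100
props = [rng.sample(range(n), n) for _ in range(n + 1)]
recvs = [rng.sample(range(n + 1), n + 1) for _ in range(n)]
match = deferred_acceptance(props, recvs)
prop_rank = sum(props[i].index(j) + 1 for j, i in match.items()) / n
recv_rank = sum(recvs[j].index(i) + 1 for j, i in match.items()) / n
print(prop_rank, recv_rank)  # average partner rank per side (1 = top choice)
```
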

2014-09-26 | 12:00-01:00 | Gross Hall 304B, Duke | Elliot Anshelevich | Stable Matching, Friendship and Altruism | info |

Stable Matching, Friendship and Altruism We will discuss both integral and fractional versions of \"correlated stable matching\" problems. Each player is a node in a social network and strives to form a good match with a neighboring player; the player utilities from forming a match are correlated. We consider the existence, computation, and inefficiency of stable matchings from which no pair of players wants to deviate. We especially focus on networks where players are embedded in a social context, and may incorporate friendship relations or altruism into their decisions. When the benefits from a match are the same for both players, we show that incorporating the well-being of other players into their matching decisions significantly decreases the price of stability, while the price of anarchy remains unaffected. Furthermore, a good stable matching achieving the price of stability bound always exists and can be reached in polynomial time. We extend these results to more general matching rewards, when players matched to each other may receive different utilities from the match. For this more general case, we show that incorporating social context (i.e., \"caring about your friends\") can make an even larger difference, and greatly reduce the price of anarchy. Finally, we extend most of our results to network contribution games, in which players can decide how much effort to contribute to each incident edge, instead of simply choosing a single node to match with. | |||||

2014-04-18 | 12:00-01:00 | Gross Hall 304B, Duke | Asu Ozdaglar | Network Security and Contagion | info |

Network Security and Contagion This paper develops a theoretical model of investments in security in a network of interconnected agents. The network connections introduce the possibility of cascading failures depending on exogenous or endogenous attacks and the profile of security investments by the agents. The general presumption in the literature, based on intuitive arguments or analysis of symmetric networks, is that because security investments create positive externalities on other agents, there will be underinvestment in security. We show that this reasoning is incomplete because of a first-order economic force: security investments are also strategic substitutes. In a general (non-symmetric) network, this implies that underinvestment by some agents will encourage overinvestment by others. We demonstrate by means of examples that not only will there be overinvestment by some agents, but aggregate probabilities of infection can also be lower in equilibrium than in the social optimum. We then provide sufficient conditions for underinvestment. This requires both sufficiently convex cost functions (just convexity is not enough) and networks that are either symmetric or locally tree-like (i.e., either trees or, in the case of stochastic networks, without local cycles with high probability). We also characterize the impact of network structure on equilibrium and optimal investments. Finally, we show that when the attack location is endogenized (by assuming that the attacker chooses a probability distribution over the location of the attack in order to maximize damage), there is another reason for overinvestment: greater investment by an agent shifts the attack to other parts of the network. This is joint work with Daron Acemoglu and Azarakhsh Malekian. | |||||

2014-04-11 | 12:00-01:00 | Gross Hall 304B, Duke | Chris LaSala | The Rise of Programmatic Advertising & Its Impact on Content Creators & Publishers | info |

The Rise of Programmatic Advertising & Its Impact on Content Creators & Publishers The automated, auction-based buying and selling of advertising has worked well for performance-based advertising over the past 15 years, particularly in search-engine and contextual-targeting marketing. But in the last five years, \"programmatic buying\" has been finding its way into 'brand' advertising, bringing with it a material impact on the business models of content producers who traditionally rely heavily on direct-sold sponsorships & negotiated CPM buys. Chris will lead a discussion that highlights several key trends in programmatic advertising, including programmatic 'direct', the mobile-app phenomenon and viewability, among others, and how these trends are shaping the decisions content producers face as they navigate yet another disruptive force in media. | |||||

2014-03-21 | 12:00-01:00 | Gross Hall 304B, Duke | Nicole Immorlica | Social Status and Badge Design | info |

Social Status and Badge Design Many websites rely on user-generated content to provide value to consumers. These websites often incentivize participation by awarding users badges based on their contributions. In this paper, we consider the design of badge mechanisms that maximize total contributions. Users exert costly effort to make contributions and, in return, are awarded badges. A badge is only valued to the extent that it signals social status, and thus badge valuations are determined endogenously by the number of users who earn each badge. The goal of this paper is to study the design of optimal and approximately optimal badge mechanisms under these status valuations. We characterize badge mechanisms by whether they use a coarse partitioning scheme, i.e., awarding the same badge to many users, or a fine partitioning scheme, i.e., awarding a unique badge to most users. We find that the optimal mechanism uses both fine partitioning and coarse partitioning. When status valuations exhibit a decreasing marginal value property, we prove that coarse partitioning is a necessary and sufficient feature of any approximately optimal mechanism. Conversely, when status valuations exhibit an increasing marginal value property, we prove that fine partitioning is necessary and sufficient for approximate optimality. Joint work with Greg Stoddard and Vasilis Syrgkanis. | |||||

2014-03-07 | 12:00-01:00 | Gross Hall 304B, Duke | Ruggiero Cavallo | Efficient Auctions with Public Budget Constraints | info |

Efficient Auctions with Public Budget Constraints In most real-world auction settings, bidders do not have unlimited money to spend on acquiring the goods they seek. In this talk I'll address how to maximize efficiency when bidders are subject to arbitrary publicly-known budget constraints. I'll start by presenting the optimal budget-feasible auction among those that make allocation decisions deterministically as a function of bids; notably, maximizing expected efficiency requires sometimes not allocating the good to an agent that is willing and able to pay more than all others. Then, expanding our scope to randomized mechanisms, I will introduce an auction that is optimal among all those that make allocation decisions based on the rank of each bid. Finally, I'll describe a simple auction that converges to perfect efficiency as the population size gets large, showing that budgets do not pose a significant obstacle to allocative efficiency in settings with lots of bidders. | |||||

2014-02-21 | 12:00-01:00 | Gross Hall 304B, Duke | Pablo A. Parrilo | Rank/Sparsity Minimization and Latent Variable Graphical Model Selection | info |

Rank/Sparsity Minimization and Latent Variable Graphical Model Selection Suppose we have a Gaussian graphical model with sample observations of only a subset of the variables. Can we separate the extra correlations induced due to marginalization over the unobserved, hidden variables from the structure among the observed variables? In other words, is it still possible to consistently perform model selection despite the latent variables? As we shall see, the key issue here is to decompose the concentration matrix of the observed variables into a sparse matrix (representing graphical model structure among the observed variables) and a low-rank matrix (representing the effects of marginalization over the hidden variables). The resulting estimator is given by a tractable convex program, and it consistently estimates model structure in the high-dimensional regime in which the number of observed/hidden variables grows with the number of samples of the observed variables. In our analysis, the algebraic varieties of sparse matrices and low-rank matrices play an important role. Based on joint work with Venkat Chandrasekaran (Caltech) and Alan Willsky (MIT). | |||||

2013-11-22 | 12:00-01:00 | Gross Hall 304B, Duke | Ozan Candogan | Optimal pricing in networks with externalities | info |

Optimal pricing in networks with externalities We study the optimal pricing strategies of a monopolist selling a divisible good (service) to consumers who are embedded in a social network. A key feature of our model is that consumers experience a (positive) local network effect. In particular, each consumer's usage level depends directly on the usage of her neighbors in the social network structure. Thus, the monopolist's optimal pricing strategy may involve offering discounts to certain agents who have a central position in the underlying network. Our results can be summarized as follows. First, we consider a setting where the monopolist can offer individualized prices and derive a characterization of the optimal price for each consumer as a function of her network position. In particular, we show that it is optimal for the monopolist to charge each agent a price that consists of three components: (i) a nominal term that is independent of the network structure, (ii) a discount term proportional to the influence that this agent exerts over the rest of the social network (quantified by the agent's Bonacich centrality), and (iii) a markup term proportional to the influence that the network exerts on the agent. In the second part of the paper, we discuss the optimal strategy of a monopolist who can only choose a single uniform price for the good, and derive an algorithm, polynomial in the number of agents, to compute such a price. Third, we assume that the monopolist can offer the good at two prices, full and discounted, and we study the problem of determining which set of consumers should be given the discount. We show that the problem is NP-hard; however, we provide an explicit characterization of the set of agents who should be offered the discounted price. Next, we describe an approximation algorithm for finding the optimal set of agents. 
We show that if the profit is nonnegative under any feasible price allocation, the algorithm guarantees at least 88% of the optimal profit. Finally, we highlight the value of network information by comparing the profits of a monopolist who does not take into account the network effects when choosing her pricing policy to those of a monopolist who uses this information optimally. Joint work with Asuman Ozdaglar (MIT EECS), and Kostas Bimpikis (Stanford GSB). http://pubsonline.informs.org/doi/pdf/10.1287/opre.1120.1066 | |||||
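The Bonacich terms in the price characterization can be computed directly. A toy sketch via the Neumann series (the price weights below are hypothetical placeholders; in the paper they come from the model's primitives):

```python
def bonacich(G, alpha, iters=200):
    """Bonacich centrality b = (I - alpha*G)^{-1} 1 via the Neumann series
    sum_k alpha^k G^k 1; converges when alpha < 1/spectral radius of G.
    G[i][j] = 1 means agent i influences agent j."""
    n = len(G)
    b = [1.0] * n
    term = [1.0] * n
    for _ in range(iters):
        term = [alpha * sum(G[i][j] * term[j] for j in range(n)) for i in range(n)]
        b = [b[i] + term[i] for i in range(n)]
    return b

# Toy star network: agent 0 influences agents 1-3 (directed edges out of 0).
G = [[0, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
GT = [[G[j][i] for j in range(4)] for i in range(4)]
out_c = bonacich(G, 0.5)    # influence each agent exerts on the network
in_c = bonacich(GT, 0.5)    # influence the network exerts on each agent
# Price rule in the paper's shape: nominal - discount + markup
# (nominal 10 and weights 2 and 1 are illustrative, not model-derived).
prices = [10 - 2 * o + 1 * m for o, m in zip(out_c, in_c)]
print(prices)  # the central influencer (agent 0) gets the deepest discount
```
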

2013-11-15 | 12:00-01:00 | Gross Hall 304B, Duke | Santiago Balseiro | Auctions for Online Display Advertising Exchanges: Approximations and Design | info |

Auctions for Online Display Advertising Exchanges: Approximations and Design Ad Exchanges are emerging Internet markets where advertisers may purchase display ad placements, in real-time and based on specific viewer information, directly from publishers via a simple auction mechanism. Advertisers join these markets with a pre-specified budget and participate in multiple second-price auctions over the length of a campaign. This paper studies the competitive landscape that arises in Ad Exchanges and the implications for publishers' decisions. The presence of budgets introduces dynamic interactions among advertisers that need to be taken into account when attempting to characterize the bidding landscape or the impact of changes in the auction design. To this end, we introduce the novel notion of a Fluid Mean Field Equilibrium (FMFE) that is behaviorally appealing, computationally tractable, and in some important cases yields a closed-form characterization. We establish that a FMFE approximates well the rational behavior of advertisers in these markets. We then show how this framework may be used to provide sharp prescriptions for key auction design decisions that publishers face in these markets. In particular, we show that ignoring budgets, a common practice in this literature, can result in significant profit losses for the publisher when setting the reserve price. Joint work with Omar Besbes and Gabriel Weintraub from Columbia Business School. Paper available at: http://ssrn.com/abstract=2149319 | |||||

2013-11-08 | 12:00-01:00 | Gross Hall 304B, Duke | Amin Sayedi | Competitive Poaching in Sponsored Search Advertising and Strategic Impact on Traditional Advertising | info |

Competitive Poaching in Sponsored Search Advertising and Strategic Impact on Traditional Advertising An important decision for a firm is how to allocate its advertising budget among different types of advertising. Most traditional channels of advertising, such as advertising on television and in print, serve the purpose of building consumer awareness of and desire for the firm's products. With recent developments in technology, sponsored search (or paid search) advertising at search engines in response to a keyword searched by a user has become an important part of most firms' advertising efforts. An advantage of sponsored search advertising is that, since the firm advertises in response to a consumer-initiated search, it is a highly targeted form of communication, and the sales-conversion rate is typically higher than in traditional advertising. However, a consumer would search for a specific product or brand only if she is already aware of it due to previous awareness-generating traditional advertising efforts. Moreover, competing firms can use sponsored search to free-ride on the awareness-building efforts of other firms by directly advertising on their keywords and therefore \"poaching\" their customers. Anecdotal evidence shows that this is a frequent occurrence. In other words, traditional advertising builds awareness, while sponsored search is a form of technology-enabled communication that helps to target consumers in a later stage of the purchase process, which induces competitors to poach these potential customers. Using a game-theoretic model, we study the implications of these tradeoffs on the advertising decisions of competing firms, and on the design of the sponsored search auction by the search engine. We find that symmetric firms may follow asymmetric advertising strategies, with one firm focusing on traditional advertising and the other firm focusing on sponsored search with poaching. 
Interestingly, the search engine benefits from discouraging competition in its own auctions in sponsored search advertising by shielding firms from competitors' bids. This explains the practice of employing \"keyword relevance\" measures, under which search engines such as Google, Yahoo! and Bing under-weight the bids of firms bidding on competitors' keywords. We also obtain various other interesting insights on the interplay between sponsored search advertising and traditional advertising. Joint work with K. Jerath and K. Srinivasan. | |||||

2013-10-25 | 12:00-01:00 | Gross Hall 304B, Duke | Vincent Conitzer | Tearing Down the Wall Between Mechanism Design With and Without Money | info |

Tearing Down the Wall Between Mechanism Design With and Without Money Many mechanism designers (algorithmic or other) draw a sharp line between mechanism design with money (auctions, exchanges, ...) and without money (social choice, matching, ...). I will discuss two papers that indicate that this line is blurrier than it seems. In the first, we study generalizations of the Vickrey auction to settings where a single agent wins, but with an arbitrary contract instead of a simple payment. In the second, we study repeated allocation of a good without payments. Here, we can create a type of artificial currency that affects future assignment of the good and that allows us to use modified versions of existing mechanisms with payments. Based on: B. Paul Harrenstein, Mathijs M. de Weerdt, and Vincent Conitzer. Strategy-Proof Contract Auctions and the Role of Ties. To appear in Games and Economic Behavior. Mingyu Guo, Vincent Conitzer, and Daniel Reeves. Competitive Repeated Allocation Without Payments. Short version appeared in Proceedings of the Fifth Workshop on Internet and Network Economics (WINE-09), pp. 244-255, Rome, Italy, 2009. | |||||

2013-09-13 | 12:00-01:00 | 304-B Gross Hall, Duke | Chris Wilkens | Utility-Target Auctions | info |

Utility-Target Auctions The first-price auction is popular in practice for its simplicity and transparency. Moreover, its potential virtues grow in complex settings where incentive compatible auctions may generate little or no revenue. Unfortunately, generalizing the first-price auction has proven fragile in theory and practice. We show that the auctioneer's choice of bidding language is critical when generalizing beyond the single-item setting, and we propose a specific construction called the utility-target auction that performs well. The utility-target auction includes a bidder's final utility as an additional parameter, identifying the single dimension along which she wishes to compete. This auction is closely related to profit-target bidding in first-price and ascending proxy package auctions and gives strong performance guarantees for a variety of complex auction environments. We also take a dynamic approach to studying pay-your-bid auctions: rather than basing performance guarantees solely on static equilibria, we study the repeated setting and show that robust performance guarantees may be derived from simple axioms of bidder behavior. For example, as long as a loser raises her bid quickly, a standard first-price auction will generate at least as much revenue as a second-price auction. We generalize such ideas to complex pay-your-bid auctions through the utility-target auction: as long as losers do not wait too long to raise bids, a first-price auction will reach an envy-free state that implies a strong lower bound on revenue; as long as winners occasionally experiment by lowering their bids, the outcome will approach the boundary of this envy-free set, so bidders do not overpay; and when players with the largest payoffs are the least patient, bids converge to the egalitarian equilibrium. Significantly, bidders need only know whether they are winning or losing in order to implement such behavior. Joint work with Darrell Hoy and Kamal Jain. | |||||
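As a toy illustration of the "losers raise their bids quickly" axiom (my own sketch, not from the paper; the values, integer bid grid, and tie-breaking are illustrative assumptions), a repeated two-bidder first-price auction in which the current loser keeps raising her bid while it stays below her value settles at the second-highest value, the second-price benchmark:

```python
def simulate_first_price(values, step=1, rounds=500):
    """Repeated first-price auction: each round, every current loser raises her
    bid by `step` as long as the new bid does not exceed her value.
    Integer bids avoid floating-point drift. Returns the final bid vector."""
    bids = [0] * len(values)
    for _ in range(rounds):
        winner = max(range(len(bids)), key=lambda i: bids[i])
        for i in range(len(bids)):
            if i != winner and bids[i] + step <= values[i]:
                bids[i] += step
    return bids

bids = simulate_first_price([100, 60])
# Both bids settle at 60: the winning bid equals the second-highest value,
# exactly the revenue a second-price auction would have collected.
```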

2013-04-26 | 12:00-01:00 | D344 LSRC, Duke | Robert Kleinberg | An Analysis of One-Dimensional Schelling Segregation | info |

An Analysis of One-Dimensional Schelling Segregation In 1969, economist Thomas Schelling introduced a landmark model of residential segregation in which individuals move out of neighborhoods where their ethnicity constitutes a minority. The model is influential both for its policy implications bearing on the causes of segregation and its possible remedies and for its theoretical implications, as a benchmark model exemplifying the emergence of macroscopic phenomena from random local interactions. Our work presents the first rigorous analysis of Schelling's original model, focusing on the case of one-dimensional ring networks. Our results call into question the conventional wisdom about Schelling segregation by showing that the average size of segregated regions scales only linearly in the size of each individual's neighborhood. This talk is based on joint work with Christina Brandt, Nicole Immorlica, and Gautam Kamath. | |||||
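A minimal simulation in this spirit (my own toy variant, not the authors' exact process: unhappy agents of opposite types swap locations, and the ring size, window width w, iteration cap, and seed are all arbitrary choices):

```python
import random

def unhappy(ring, i, w):
    """Agent i is unhappy if her own type is a strict minority in the
    neighbourhood of w cells on each side (2w + 1 cells including i)."""
    n = len(ring)
    window = [ring[(i + d) % n] for d in range(-w, w + 1)]
    return 2 * sum(1 for t in window if t == ring[i]) < len(window)

def swap_step(ring, w, rng):
    """Swap one randomly chosen unhappy pair of opposite types.
    Returns False when no such pair exists (a stable configuration)."""
    zeros = [i for i in range(len(ring)) if ring[i] == 0 and unhappy(ring, i, w)]
    ones = [i for i in range(len(ring)) if ring[i] == 1 and unhappy(ring, i, w)]
    if not zeros or not ones:
        return False
    i, j = rng.choice(zeros), rng.choice(ones)
    ring[i], ring[j] = ring[j], ring[i]
    return True

rng = random.Random(1)
ring = [rng.randint(0, 1) for _ in range(200)]
ones_before = sum(ring)
for _ in range(5000):  # cap iterations; termination is not guaranteed in general
    if not swap_step(ring, w=2, rng=rng):
        break
```

Swaps preserve the number of agents of each type, so one can then measure the lengths of monochromatic runs in `ring` to study segregated-region sizes.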

2013-04-05 | 12:00-01:00 | D344 LSRC, Duke | Costis Daskalakis | Revenue Optimization in Multi-Item Auctions | info |

Revenue Optimization in Multi-Item Auctions In his seminal paper, Myerson [1981] provides a revenue-optimal auction for a seller who is looking to sell a single item to multiple bidders. Extending this auction to simultaneously selling multiple heterogeneous items has been one of the central problems in Mathematical Economics. We provide such an extension that is also computationally efficient, using a combinatorial optimization approach. | |||||
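For background, the single-item benchmark being extended here is Myerson's: with a single bidder of value $v \sim F$, the optimal mechanism posts the price where the virtual value $\phi(v) = v - (1 - F(v))/f(v)$ crosses zero. A sketch for the illustrative Uniform[0,1] case (the distribution and grid search are my own assumptions, not from the talk):

```python
def virtual_value(v):
    # Uniform[0,1]: F(v) = v and f(v) = 1, so phi(v) = v - (1 - v) = 2v - 1
    return v - (1.0 - v)

def optimal_posted_price(grid=10_000):
    """Grid-search the posted price maximizing expected revenue r * (1 - F(r))."""
    best_r, best_rev = 0.0, 0.0
    for k in range(grid + 1):
        r = k / grid
        rev = r * (1.0 - r)  # price times probability of sale under Uniform[0,1]
        if rev > best_rev:
            best_r, best_rev = r, rev
    return best_r

r_star = optimal_posted_price()
# r_star is 0.5, exactly where the virtual value 2v - 1 crosses zero.
```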

2013-03-29 | 02:00-03:00 | Soc Phych 126, Duke | Herve Moulin | Loss Calibrated Methods for Bipartite Rationing Problems | info |

Loss Calibrated Methods for Bipartite Rationing Problems The standard problem of rationing a single overdemanded commodity has a natural bipartite extension with multiple types of a one-dimensional commodity (e.g., stored in different locations), where each agent can only consume some types of the resource (e.g., has access to only a subset of locations). We define new standard loss-calibrated rationing methods, which equalize across agents the ratio of shares to (calibrated) losses (demand minus share). We extend them to bipartite methods that 1) are not affected by the elimination of an edge and the corresponding flow (Consistency), and 2) treat resource types with identical connectivity as a single type. They are essentially the only standard methods with a bipartite extension meeting the two properties above. Most of the parametric methods discussed in the literature do not admit such an extension. Joint work with Jay Sethuraman (Columbia University). | |||||

2013-03-22 | 12:00-01:00 | D344 LSRC, Duke | Yiling Chen | Information Elicitation Sans Verification | info |

Information Elicitation Sans Verification The recent advent of human computation -- employing groups of non-experts to solve problems -- has motivated study of a question in mechanism design: How do we elicit useful information when we are unable to verify reports? Existing methods, such as peer prediction and Bayesian truth serum, require assumptions either on the mechanism's knowledge about the participants or on the information structure of the participants in order to elicit private information. Meanwhile, however, there are simple methods such as the ESP Game that seem to require no such assumptions, yet have achieved success in practice. We attack this paradox from two directions. First, we give a broad formalization of the problem of information elicitation without verification and provide impossibility results for the general setting. Primarily, we show that, without assumptions on designer knowledge or participants' information, no mechanism in this setting is truthful. Second, we define and analyze the output agreement class of mechanisms, an extremely broad but simple class in which players are rewarded based on the metric distance between their reports. Output agreement makes no assumptions on designer knowledge or participants' information and thus cannot be "truthful". We resolve the paradox by showing that it is nonetheless useful: It elicits the correct answer according to the common knowledge among the players. This talk is based on joint work with Bo Waggoner. | |||||
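A one-function sketch of the output agreement reward structure described above (the real-line metric and the linear reward shape are my own illustrative choices, not the paper's definition):

```python
def output_agreement_reward(report_a, report_b, scale=1.0):
    """Both players receive a reward decreasing in the metric distance between
    their reports; here: absolute difference on the real line, linearly
    discounted and floored at zero."""
    return max(0.0, scale - abs(report_a - report_b))

# Matching the other player's report -- e.g., the common-knowledge answer --
# is the only way to earn the full reward.
full = output_agreement_reward(7, 7)      # -> 1.0
nothing = output_agreement_reward(0, 5)   # -> 0.0
```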

2013-02-20 | 12:00-01:00 | 311 North Building, Duke | Taiki Todo | False-name-proof matching | info |

False-name-proof matching Matching a set of agents to a set of objects has many real applications. One well-studied framework is that of priority-based matching, in which each object is assumed to have a priority order over the agents. The Deferred Acceptance (DA) and Top-Trading-Cycle (TTC) mechanisms are the best-known strategy-proof mechanisms. However, in highly anonymous environments, the set of agents is not known a priori, and it is more natural for objects to instead have priorities over characteristics (e.g., the student's GPA or home address). In this paper, we extend the model so that each agent reports not only its preferences over objects, but also its characteristic. We derive results for various notions of strategy-proofness and false-name-proofness, corresponding to whether agents can only report weaker characteristics or also incomparable or stronger ones, and whether agents can only claim objects allocated to their true accounts or also those allocated to their fake accounts. Among other results, we show that DA and TTC satisfy a weak version of false-name-proofness. Furthermore, DA also satisfies a strong version of false-name-proofness, while TTC fails to satisfy it without an acyclicity assumption on priorities. | |||||
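For reference, a compact sketch of agent-proposing Deferred Acceptance with unit-capacity objects (the example data are made up, and the model in the talk additionally has agents reporting characteristics, which this sketch omits):

```python
def deferred_acceptance(prefs, priorities):
    """Agent-proposing DA with unit-capacity objects. prefs[a] lists objects in
    a's preference order; priorities[o] lists agents from highest to lowest
    priority. Returns a dict mapping each matched agent to her object."""
    next_choice = {a: 0 for a in prefs}   # index of a's next object to propose to
    held = {}                             # object -> agent tentatively holding it
    free = list(prefs)                    # agents still proposing
    while free:
        a = free.pop()
        if next_choice[a] >= len(prefs[a]):
            continue                      # a has been rejected everywhere
        o = prefs[a][next_choice[a]]
        next_choice[a] += 1
        if o not in held:
            held[o] = a
        elif priorities[o].index(a) < priorities[o].index(held[o]):
            free.append(held[o])          # o bumps its lower-priority holder
            held[o] = a
        else:
            free.append(a)                # a is rejected, proposes again later
    return {a: o for o, a in held.items()}

prefs = {'a1': ['x', 'y'], 'a2': ['x', 'y']}
priorities = {'x': ['a2', 'a1'], 'y': ['a1', 'a2']}
matching = deferred_acceptance(prefs, priorities)
# Both agents propose to x; x's priority favors a2, so a1 ends up with y.
```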

2012-11-30 | 12:00-01:00 | McKinley seminar room at Fuqua, Duke | Tuomas Sandholm | Modern Dynamic Kidney Exchanges | info |

Modern Dynamic Kidney Exchanges In kidney exchanges, patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. The clearing problem involves finding a social welfare maximizing set of non-overlapping short cycles. We proved this is NP-hard. It was one of the main obstacles to a national kidney exchange. We developed the first algorithm capable of clearing these exchanges optimally on a nationwide scale. The key was incremental problem formulation, because the formulation that gives tight LP bounds is too large to even store. On top of the branch-and-price paradigm we developed techniques that dramatically improve runtime and memory usage. Furthermore, clearing is actually an online problem where patient-donor pairs and altruistic donors appear and expire over time. We first developed trajectory-based online stochastic optimization algorithms (that use our optimal offline solver as a subroutine) for this. Such algorithms tend to be too compute-intensive at run time, so recently we developed a general approach for giving batch algorithms guidance about the future without a run-time hit. It learns potentials of elements of the problem, and then uses them in each batch clearing. I will share experiences from using our algorithms to run the UNOS US-wide kidney exchange and two regional ones. We introduced several design enhancements to the exchanges. For one, we used our algorithms to launch the first never-ending altruistic donor chains. I will present new results, both theoretical and experimental, on the role of long chains. I will also discuss planning that is robust against last-minute execution failures. The talk covers material from the following papers: | |||||
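A brute-force toy version of the clearing problem (nothing like the production branch-and-price solver, but it shows the objective: a maximum-size set of vertex-disjoint short cycles in the compatibility digraph; the instances are made up):

```python
from itertools import permutations

def short_cycles(compat, cap=3):
    """All directed cycles of length <= cap; compat[i] is the set of pairs
    whose patient can receive a kidney from pair i's donor."""
    cycles = []
    for size in range(2, cap + 1):
        for combo in permutations(range(len(compat)), size):
            if combo[0] != min(combo):
                continue  # keep one canonical rotation of each cycle
            if all(combo[(k + 1) % size] in compat[combo[k]] for k in range(size)):
                cycles.append(combo)
    return cycles

def clear(compat, cap=3):
    """Exhaustive search for a vertex-disjoint set of cycles maximizing the
    number of transplants (every vertex on a chosen cycle is transplanted)."""
    cycles = short_cycles(compat, cap)
    best = []

    def extend(idx, chosen, used):
        nonlocal best
        if sum(map(len, chosen)) > sum(map(len, best)):
            best = list(chosen)
        for j in range(idx, len(cycles)):
            if used.isdisjoint(cycles[j]):
                extend(j + 1, chosen + [cycles[j]], used | set(cycles[j]))

    extend(0, [], set())
    return best

three_way = clear([{1}, {2}, {0}])       # one 3-cycle: -> [(0, 1, 2)]
two_pairs = clear([{1}, {0}, {3}, {2}])  # two disjoint 2-cycles, 4 transplants
```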

2012-11-29 | 04:30-06:00 | 130A North Building, Duke | Tuomas Sandholm | Algorithms for Large Incomplete-Information Games | info |

Algorithms for Large Incomplete-Information Games Incomplete-information games - such as most auctions, negotiations, card games, and future security games - cannot be solved using minimax search; rather, totally different algorithms are needed. I will overview our work on incomplete-information games from the last seven years. I will start by discussing lossless and lossy abstraction algorithms. I will present brand new results on the first lossy abstraction algorithms with bounds on solution quality. I will then discuss first-order algorithms for finding an epsilon-equilibrium in two-person zero-sum games; one of these algorithms has O(log(1/epsilon)) convergence, which is exponentially faster than prior first-order methods. I will then discuss how qualitative knowledge of the structure of equilibrium can be used to enable and speed up equilibrium finding. I will finish with brand new results on safe opponent exploitation and strategy purification. We have applied the techniques discussed in this talk to poker, yielding some of the world's best poker-playing programs. | |||||

2012-09-21 | 12:00-01:00 | D344 LSRC, Duke | Angelina Vidali | Mechanism Design: Auctions and Scheduling | info |

Mechanism Design: Auctions and Scheduling I will give an introduction and present some of my recent results on one of the most fundamental problems in algorithmic game theory and mechanism design: the problem of scheduling unrelated machines to minimize the makespan. I will emphasize the connection between this problem and the problem of designing truthful auctions for selling multiple items. I will also present a geometrical characterization of truthfulness, as well as some very recent work on strongly truthful mechanisms. We assume that the machines behave like selfish players: they have to get paid in order to process the tasks, and would lie about their processing times if they could increase their utility in this way. The problem was proposed thirteen years ago in the seminal paper of Nisan and Ronen, where it was shown that the approximation ratio of mechanisms is between $2$ and $n$. I improve this to $1+\sqrt{2}$ for three or more machines and to $1+\varphi$ for many machines. I also characterize the class of truthful mechanisms for the case of two players (regardless of approximation ratio) and for the case of SMon (strongly monotone) and continuous n-player mechanisms, and show how the result can be used as a black box to obtain characterizations for other domains. | |||||

2012-09-14 | 12:00-01:00 | D344 LSRC, Duke | Ashish Goel | Trust and Mistrust in Social Networks | info |

Trust and Mistrust in Social Networks Automated reputation and trust systems play an ever increasing role in the emerging networked society. We will first describe a model of networked trust that functions by exchange of IOUs among participants. Informally, every node acts as a bank and prints its own currency, which is then used to purchase services within the network. Such "trust networks" are robust to infiltration, since any node only accepts currency printed by other nodes that it directly trusts. We will analyze the liquidity of this model, i.e., the number of transactions that such a network can support. We will show that in many interesting cases, the liquidity of these trust networks is comparable to a system where currency is issued by a single centralized bank. We will also briefly describe some of our work to reduce polarization in social networks. | |||||

2012-09-07 | 12:00-01:00 | D344 LSRC, Duke | Troels Sorensen | On the Approximation Performance of Fictitious Play in Finite Games | info |

On the Approximation Performance of Fictitious Play in Finite Games We study the performance of Fictitious Play when used as a heuristic for finding an approximate Nash equilibrium of a two-player game. We exhibit a class of two-player games with payoffs in the range $[0, 1]$ showing that Fictitious Play fails to find a solution with an additive approximation guarantee significantly better than 1/2. Our construction shows that for $n \times n$ games, in the worst case both players may perpetually have mixed strategies whose payoffs fall short of the best response by an additive quantity $1/2 - O(1/n^{1-\delta})$ for arbitrarily small $\delta$. We also show an essentially matching upper bound of $1/2 - O(1/n)$. | |||||
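A minimal implementation of the process being analyzed (standard simultaneous-update Fictitious Play for a two-player zero-sum game; the matching-pennies example, initialization, and tie-breaking are my own choices, not from the paper):

```python
def fictitious_play(payoff, rounds=2000):
    """Simultaneous Fictitious Play in a two-player zero-sum game.
    payoff[i][j] is the row player's payoff; the column player gets its
    negation. Returns the two empirical mixed strategies."""
    m, n = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] = col_counts[0] = 1  # arbitrary initial beliefs
    for _ in range(rounds):
        # each side best-responds to the opponent's empirical mixture so far
        row_vals = [sum(payoff[i][j] * col_counts[j] for j in range(n))
                    for i in range(m)]
        col_vals = [sum(payoff[i][j] * row_counts[i] for i in range(m))
                    for j in range(n)]
        best_i = max(range(m), key=row_vals.__getitem__)
        best_j = min(range(n), key=col_vals.__getitem__)  # column minimizes
        row_counts[best_i] += 1
        col_counts[best_j] += 1
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])

row_mix, col_mix = fictitious_play([[1, -1], [-1, 1]])  # matching pennies
# Both empirical mixtures drift toward the unique equilibrium (1/2, 1/2).
```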

2012-08-28 | 11:30-01:00 | 113 Social Science, Duke | Milind Tambe | Game theory for security: Key algorithmic principles, deployed systems, lessons learned | info |

Game theory for security: Key algorithmic principles, deployed systems, lessons learned Security is a critical concern around the world, whether it is the challenge of protecting ports, airports, and other critical national infrastructure, protecting wildlife and forests, or suppressing crime in urban areas. In many of these cases, limited security resources prevent full security coverage at all times; instead, these limited resources must be scheduled, avoiding schedule predictability, while simultaneously taking into account different target priorities, the responses of the adversaries to the security posture, and potential uncertainty over adversary types. Computational game theory can help design such unpredictable security schedules. Indeed, casting the problem as a Bayesian Stackelberg game, we have developed new algorithms that are now deployed over multiple years in multiple applications for security scheduling: at the Los Angeles International Airport (LAX), for the Federal Air Marshals (FAMS), and for the US Coast Guard in Boston and New York (and potentially other ports); applications are also under evaluation for the TSA and for the Los Angeles Sheriff's Department. These applications are leading to real-world, use-inspired research in the emerging research area of security games; specifically, the research challenges posed by these applications include scaling up security games to large-scale problems, handling significant adversarial uncertainty, dealing with bounded rationality of human adversaries, and other interdisciplinary challenges. This lecture will provide an overview of my research group's work in this area, outlining key algorithmic principles and research results, as well as a discussion of our deployed systems and lessons learned. | |||||

2012-08-17 | 12:00-01:00 | D344 LSRC, Duke | Markus Brill | Computing Set-Valued Solution Concepts in Social Choice and Game Theory | info |

Computing Set-Valued Solution Concepts in Social Choice and Game Theory In this talk, I will present recent results on the computational complexity of solution concepts from both game theory and social choice theory. I will first focus on solution concepts for normal-form games that are based on varying notions of dominance. These concepts are intuitively appealing and admit unique minimal solutions in important subclasses of games. Examples include Shapley's saddles, Harsanyi and Selten's primitive formations, Basu and Weibull's CURB sets, and Dutta and Laslier's minimal covering sets. I will outline two generic algorithms for computing these concepts and investigate for which classes of games and which properties of the underlying dominance notion the algorithms are sound and efficient. I will then turn to social choice theory, where the complexity of the winner determination problem has been studied for almost all common voting rules. A notable exception, possibly caused by some confusion regarding its exact definition, is the method of ranked pairs. The original version of the method, due to Tideman, is irresolute and neutral. A variant introduced subsequently uses an exogenously given tie-breaking rule and therefore fails neutrality. The latter variant is the one most commonly studied in the area of computational social choice, and it is easy to see that its winner determination problem is computationally tractable. I will show that, by contrast, computing the set of winners selected by Tideman's original ranked pairs method is NP-hard, thus revealing a trade-off between tractability and neutrality. Joint work with Felix Brandt and Felix Fischer. | |||||

2012-08-15 | 12:00-01:00 | D344 LSRC, Duke | Hossein Azari Soufiani | Random Utility for Social Choice | info |

Random Utility for Social Choice Random utility theory models an agent's preferences over alternatives by drawing a real-valued score for each alternative (typically independently) from a parameterized distribution, and then ranking alternatives according to their scores. A special case that has received significant attention is the Plackett-Luce model, for which fast inference methods for maximum likelihood estimators are available. We develop conditions on general random utility models that enable fast inference within a Bayesian framework through MC-EM, providing unimodal log-likelihood functions. Results on both real-world and simulated data provide support for the scalability of the approach, despite its flexibility. | |||||
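The Plackett-Luce special case mentioned above can be sampled with the Gumbel-max trick, which makes the random utility view explicit: each alternative's utility is its log-weight plus independent Gumbel noise, and the ranking sorts the utilities. A sketch (the weights, sample size, and seed are illustrative choices):

```python
import math
import random

def sample_ranking(weights, rng):
    """Sample a Plackett-Luce ranking via the Gumbel-max trick: the utility of
    alternative i is log(w_i) plus independent Gumbel noise; rank by utility.
    -log(-log(U)) with U uniform on (0, 1) is a standard Gumbel sample."""
    utilities = [math.log(w) - math.log(-math.log(rng.random())) for w in weights]
    return sorted(range(len(weights)), key=lambda i: -utilities[i])

rng = random.Random(0)
weights = [4.0, 2.0, 1.0]
top_choices = [sample_ranking(weights, rng)[0] for _ in range(5000)]
freq = top_choices.count(0) / len(top_choices)
# Under Plackett-Luce, P(alternative 0 is ranked first) = 4 / (4 + 2 + 1) = 4/7.
```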

2012-03-30 | 12:00-01:15 | D344 LSRC, Duke | Hamid Nazerzadeh | Optimal Dynamic Mechanism Design and the Virtual Pivot Mechanism | info |

Optimal Dynamic Mechanism Design and the Virtual Pivot Mechanism We consider the problem of designing optimal mechanisms for settings where agents have dynamic private information. We present the Virtual Pivot Mechanism, which is optimal in a large class of environments that satisfy a separability condition. The mechanism satisfies a rather strong equilibrium notion (it is periodic ex-post incentive compatible and individually rational). We provide both necessary and sufficient conditions for immediate incentive compatibility for mechanisms that satisfy periodic ex-post incentive compatibility in future periods. The result also yields a simple mechanism for selling a sequence of items to a single buyer. We also show that the allocation rule of the Virtual Pivot Mechanism has a very simple structure (a Virtual Index) in multi-armed bandit settings. This is based on joint work with Sham Kakade and Ilan Lobel. | |||||

2012-03-16 | 12:00-01:15 | D344 LSRC, Duke | Sebastien Lahaie | A Tractable Combinatorial Market Maker Using Constraint Generation | info |

A Tractable Combinatorial Market Maker Using Constraint Generation We present a new automated market maker that provides liquidity across multiple logically interrelated securities. Our approach lies somewhere between the industry standard---treating related securities as independent and thus not transmitting any information from one security to another---and a full combinatorial market maker for which pricing is computationally intractable. Our market maker, based on convex optimization and constraint generation, is tractable like independent securities yet propagates some information among related securities like a combinatorial market maker, resulting in more complete information aggregation. We prove several favorable properties of our scheme and evaluate its information aggregation performance on survey data involving hundreds of thousands of complex predictions about the 2008 U.S. presidential election. | |||||
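For context, the industry-standard independent-securities baseline is typically a cost-function market maker such as Hanson's LMSR run separately per security; here is a sketch of that baseline (not the paper's combinatorial mechanism; the liquidity parameter b and the trades are illustrative assumptions):

```python
import math

def lmsr_cost(q, b=100.0):
    """Hanson's LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(x / b for x in q)  # subtract the max for numerical stability
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices: the softmax of q / b; positive and summing to 1."""
    m = max(x / b for x in q)
    exps = [math.exp(x / b - m) for x in q]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(q, delta, b=100.0):
    """What a trader pays to buy the share bundle `delta`: C(q + delta) - C(q)."""
    return lmsr_cost([a + d for a, d in zip(q, delta)], b) - lmsr_cost(q, b)

prices = lmsr_prices([0.0, 0.0])             # fresh two-outcome market: [0.5, 0.5]
cost = trade_cost([0.0, 0.0], [10.0, 0.0])   # buying pushes the price up, so cost > 5
```

Running one such market maker per security transmits no information between related securities, which is exactly the shortcoming the paper's constraint-generation approach addresses.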

2011-12-02 | 11:45-01:15 | D344 LSRC, Duke | Ben Lubin | Market Mechanisms for Data Center Resource Allocation and Their Pricing | info |

Market Mechanisms for Data Center Resource Allocation and Their Pricing As data-center energy consumption continues to rise, efficient power management is becoming increasingly important. In this talk, I will describe work that examines the use of a market mechanisms for finding the right balance between power and performance. The market enables a separation between a buyer side that strives to maximize performance and a seller side that strives to minimize power and other costs. A concise and scalable description language is defined for agent preferences that admits a mixed integer program for computing optimal allocations. A sophisticated proxy enables agents to bid on various service levels without having to reason about the allocation necessary to achieve a given level of performance. The system as defined can use any pricing rule to charge for the allocation that is chosen. In the latter part of the talk I will examine the desiderata for such possible rules, as applicable in a variety of combinatorial settings, and various methods for both constructing and comparing them. | |||||

2011-11-18 | 11:45-12:45 | D344 LSRC, Duke | Kevin Leyton-Brown | Beyond Equilibrium: Predicting Human Behavior in Normal Form Games | info |

Beyond Equilibrium: Predicting Human Behavior in Normal Form Games It is standard in multiagent settings to assume that agents will adopt Nash equilibrium strategies. However, studies in experimental economics demonstrate that Nash equilibrium is a poor description of human players' initial behavior in normal-form games. In this paper, we consider a wide range of widely studied models from behavioral game theory. For what we believe is the first time, we evaluate each of these models in a meta-analysis, taking as our data set large-scale and publicly available experimental data from the literature. We then show how to analyze the parameters of the best-performing model, and propose modifications to it that we believe make it more suitable for practical prediction of initial play by humans in normal-form games. | |||||

2011-11-17 | 02:00-03:00 | D344 LSRC, Duke | Ian Kash | Economics of BitTorrent Communities | info |

Economics of BitTorrent Communities Over the years, private file-sharing communities built on the BitTorrent protocol have developed their own policies and mechanisms for motivating members to share content and contribute resources. By requiring members to maintain a minimum ratio between uploads and downloads, private communities effectively establish credit systems, and with them full-fledged economies. In this talk I'll report on a measurement study of DIME -- a community for sharing live concert recordings -- that sheds light on the economic forces affecting users in such communities. A key observation is that while the download of files is priced only according to the size of the file, the rate of return for seeding new files is significantly greater than for seeding old files. I'll describe a natural experiment that shows users react to such differences in resale value by preferentially consuming older files during a free leech period, and discuss suggestions for possible improvement. Joint work with John K. Lai, Haoqi Zhang, and Aviv Zohar. | |||||

2011-11-11 | 02:00-03:00 | D344 LSRC, Duke | David Thompson | Uncertainty and Learning in Auctions | info |

Uncertainty and Learning in Auctions We study rational bidders in auctions, but relax the standard assumption that each bidder knows precisely what she would be willing to pay. In our model, this information has an associated cost representing the bidders' computational or cognitive limits. Thus a bidder does not just decide how to bid, she first has to decide how well informed to become. Initially, we studied this problem from the perspective of the bidders, identifying Nash equilibrium strategies for classical auction designs. More recently, we considered the perspective of the auctioneer, investigating the problem of auction design for uncertain bidders. | |||||

2011-11-11 | 11:45-12:45 | 2008 MBA Classroom, Fuqua School of Business, Duke | Preston McAfee | Economics and Machine Learning | info |

Economics and Machine Learning Machine learning is increasingly used in exchange environments, e.g. to predict click rates on internet advertisements and stock market trades. In such settings, economic mechanisms interact with ML. The talk focuses on internet advertising as an example. Four distinct interactions are considered. First, offline-trained ML exposed to online auctions fails to predict accurately. Second, incorporating machine learning sometimes requires not selecting the expected highest value item for learning purposes, which requires changing buyer payments to reflect ML actions. Third, changing the buyer payments imposes externalities on sellers; these externalities may be mitigated by using the mechanism as a budget breaker. Finally, multiple interacting ML systems naturally learn to depend on extraneous variables. (The talk refers to four papers.) | |||||

2011-11-04 | 11:45-12:45 | D344 LSRC, Duke | Rahul Sami | Sybilproof Transitive Trust Protocols | info |

Sybilproof Transitive Trust Protocols I will describe our recent results on protocols that enable potentially profitable but risky interactions between two parties (the principal and the agent), in the absence of a direct trust relationship or common currency between the two parties. In such situations, it is possible to safely enable interactions mediated by a chain of credit or "trust" links. We introduce a model built on pairwise trust accounts that provides insight into a number of applications, including open currency systems, network trust aggregation systems, and manipulation-resistant recommender systems. We show that with indirect trust exchange protocols, some friction is unavoidable: Any protocol that satisfies a natural strategic safety property that we call sum-sybilproofness can sometimes lead to a reduction in expected overall trust balances, even on interactions that are profitable in expectation. Thus, for long-term growth of trust accounts, which are assets enabling risky but valuable interactions, it may be necessary to limit the use of indirect trust. We present transitive trust protocols that achieve the optimal rate of expected growth in trust accounts among all protocols satisfying the sum-sybilproofness condition. This talk covers joint work with Paul Resnick. | |||||

2011-10-28 | 11:45-12:45 | D344 LSRC, Duke | Sayan Bhattacharya | Approximation Algorithm for Security Games with Costly Resources | info |

Approximation Algorithm for Security Games with Costly Resources In recent years, algorithms for computing game-theoretic solutions have been developed for real-world security domains. These games are between a defender, who must allocate her resources to defend potential targets, and an attacker, who chooses a target to attack. Existing work has assumed the set of defender resources to be fixed. This assumption precludes the effective use of approximation algorithms, since a slight change in the defender's allocation strategy can result in a massive change in her utility. In contrast, we consider a model where resources are obtained at a cost, initiating the study of the following optimization problem: Minimize the total cost of the purchased resources, given that every target has to be defended with at least a certain probability. We give an efficient logarithmic approximation algorithm for this problem. Joint work with Vincent Conitzer and Kamesh Munagala. |
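The logarithmic guarantee echoes the classic greedy set-cover analysis; here is a deterministic toy version of the purchasing problem (coverage is all-or-nothing rather than probabilistic, and the instance is made up) showing the cost-effectiveness rule behind such approximations:

```python
def greedy_cover(targets, resources):
    """resources: list of (cost, frozenset_of_covered_targets). Greedy set
    cover: repeatedly buy the resource with the lowest cost per newly covered
    target -- the classic ln(n)-approximation, here standing in as a
    deterministic simplification of the probabilistic coverage problem."""
    uncovered = set(targets)
    bought, total_cost = [], 0.0
    while uncovered:
        best = min(
            (r for r in resources if r[1] & uncovered),
            key=lambda r: r[0] / len(r[1] & uncovered),
            default=None,
        )
        if best is None:
            raise ValueError("targets cannot all be covered")
        bought.append(best)
        total_cost += best[0]
        uncovered -= best[1]
    return bought, total_cost

targets = {1, 2, 3}
resources = [(1.0, frozenset({1, 2})),
             (1.0, frozenset({3})),
             (10.0, frozenset({1, 2, 3}))]
bought, spent = greedy_cover(targets, resources)
# Greedy buys the two cheap resources (total cost 2.0), skipping the bundle.
```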

Comments to webmaster@cs.duke.edu | Report an error on this page | © Duke University Department of Computer Science 2016.