Networking Seminar
Fall 2005
Talks:

Michal Feldman - Hidden-Action in Multi-Hop Routing
Shay Kutten - Locality of Fault Tolerance
Yaron De Levie - Space and Step Complexity Efficient Adaptive Collect
Boaz Patt-Shamir - Trust, Collaboration and Recommendations in Peer-to-Peer Systems
Nir Halachmi - Protecting Bursty Applications Against Traffic Aggressiveness (Dec 22, 2005)
Shmuel Friedland - The Role of Singular Value Decomposition in Data Analysis (Jan 5, 2006)
Eyal Even-Dar - Routing Without Regret (Jan 12, 2006)
Zvi Lotker - Publish and Perish: Definition and Analysis of an $n$-Person Publication Impact Game (Jan 19, 2006)
Allon Shafrir - Approximate Top-k Queries in Sensor Networks (Jan 26, 2006)
Shie Mannor - Asymptotics of Efficiency Loss in Competitive Market Mechanisms (Feb 3, 2006)
Speaker: Michal Feldman, TAU and HU
Title: Hidden-Action in Multi-Hop Routing
Abstract:
In multi-hop networks, the actions taken by individual intermediate
nodes are typically hidden from the communicating endpoints; all the
endpoints can observe is whether or not the end-to-end transmission
was successful. Therefore, in the absence of incentives to the
contrary, rational (i.e., selfish) intermediate nodes may choose to
forward packets at a low priority or simply not forward packets at
all. Using a principal-agent model, we show how the hidden-action
problem can be overcome through appropriate design of contracts, in
both the direct (the endpoints contract with each individual router)
and recursive (each router contracts with the next downstream router)
cases. We further demonstrate that per-hop monitoring does not
necessarily improve the utility of the principal or the social welfare
in the system. In addition, we generalize existing mechanisms that
deal with hidden-information to handle scenarios involving both
hidden-information and hidden-action.
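The incentive problem above can be illustrated with a toy principal-agent calculation; the payment rule, cost, and success probabilities below are hypothetical, not the paper's model.

```python
# Toy hidden-action example (not the paper's contract): the principal
# observes only end-to-end success, so it pays a node s on success and 0
# on failure. Forwarding costs the node c and succeeds with probability
# p_high; shirking is free but succeeds only with probability p_low.
# Forwarding is rational iff p_high * s - c >= p_low * s, so the minimal
# incentive-compatible payment is s = c / (p_high - p_low).
def min_incentive_payment(c, p_high, p_low):
    return c / (p_high - p_low)

print(min_incentive_payment(c=1.0, p_high=0.9, p_low=0.1))  # 1.25
```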
Speaker: Shay Kutten
Title: Locality of Fault Tolerance
Abstract:
Traditional fault tolerance methods are global in nature. Some involve
resetting all the network nodes. Others use timeouts that imply
waiting for the slowest and furthest node, etc.
Local detection of faults is based on the following observation: even
computations that cannot be performed locally can be verified locally.
Hence, it is possible to verify global correctness predicates locally
(cooperating only with neighbors). If the predicate does not hold,
corrective measures can be invoked.
The cost of correction can also sometimes be local to the faults, in
the sense that when the extent of the faults is smaller, the cost of
the recovery is lower too.
The talk will mention results from several papers by several authors,
including a very recent paper.
Speaker: Yaron De Levie
Title: Space and Step Complexity Efficient Adaptive Collect
Abstract: Deterministic collect algorithms that are adaptive to total
contention and efficient in both space and step complexity are
presented. One of them has optimal O(k) step and O(n) space
complexity, but restricts the size of the process identifiers to O(n),
where n is the total number of processes in the system and k is the
total contention, i.e., the total number of processes active during
the execution. Unrestricting the name space increases the space
complexity to O(n^2) while leaving the step complexity at O(k). To
date, all deterministic adaptive collect algorithms that we are aware
of either are nearly quadratic in their step complexity or have memory
overhead exponential in n.
Speaker: Boaz Patt-Shamir, TAU
Title: Trust, Collaboration and Recommendations in Peer-to-Peer Systems
Abstract:
In peer-to-peer (p2p) systems, work is distributed among the
participating peers, unlike the classical client-server architecture,
where work is done by a central server. One of the fundamental
problems facing p2p systems is that different peers may have different
interests: in extreme cases, some peers may even wish to destroy the
usability of the system. It is therefore necessary to develop a
(possibly implicit) notion of trust, so as to allow peers to continue
functioning in the face of diverse, potentially hostile peers. In this
talk I will describe a line of recent research which studies the
algorithmic aspect of concepts such as "trust," "collaborative
filtering" and "recommendation systems."
Based on various papers with Alon, Awerbuch, Azar, Lotker, Peleg and Tuttle.
Title: Protecting Bursty Applications Against Traffic Aggressiveness
Speaker: Nir Halachmi (IDC)
Time: Thursday, 22/12, 10:30
Location: Shreiber 309
Abstract:
Aggressive use of networks, in particular the Internet, whether by malicious or innocent users, threatens the service availability and quality of polite applications. Common queueing mechanisms, which supposedly solve the problem, are shown in this work to be ineffective for bursty applications, including Web applications. This can be exploited by malicious users to conduct a new kind of denial-of-service attack.
We propose a new traffic control mechanism called {\it Aggressiveness
Protective Queueing (APQ)} which is based on
attributing importance weights to the users and which solves this problem by
dynamically decreasing the weight of the aggressive users. The actual weight
used for a flow is a dynamically varying parameter reflecting the past bandwidth
usage of the flow. We show that under heavy load (deterministic model), {\it APQ} significantly restricts the amount of traffic an
aggressive user can send and bounds it to at most {\it twice} the amount of
traffic sent by a polite (regular) user. Simulation results
demonstrate the effectiveness of APQ under a stochastic environment.
Joint work with Anat Bremler-Barr (IDC) and Hanoch Levy (TAU)
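The core idea of weighting flows by past usage can be sketched as follows; the specific weight-update rule here (weight inversely proportional to cumulative usage) is an illustrative assumption, not the paper's APQ definition.

```python
# Hedged sketch of the idea behind Aggressiveness Protective Queueing:
# a flow's scheduling weight shrinks as its past bandwidth usage grows,
# so aggressive flows receive a smaller share of the link. The update
# rule below is a hypothetical stand-in for the paper's mechanism.
class ApqScheduler:
    def __init__(self, flows, base_weight=1.0):
        self.usage = {f: 0.0 for f in flows}   # cumulative bandwidth used
        self.base = base_weight

    def weight(self, flow):
        # Dynamic weight, inversely related to past usage.
        return self.base / (1.0 + self.usage[flow])

    def allocate(self, capacity):
        # Weighted fair share of the link capacity for one round.
        total = sum(self.weight(f) for f in self.usage)
        shares = {f: capacity * self.weight(f) / total for f in self.usage}
        for f, s in shares.items():
            self.usage[f] += s
        return shares

sched = ApqScheduler(["aggressive", "polite"])
sched.usage["aggressive"] = 9.0   # heavy past usage penalizes this flow
print(sched.allocate(1.0))
```

With this rule, the flow with heavy past usage gets roughly a tenth of the polite flow's share in the next round.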
Speaker: Shmuel Friedland
Title: The Role of Singular Value Decomposition in Data Analysis
Abstract:
The singular value decomposition (SVD) of an $m\times n$ matrix has
emerged as a very important tool in data analysis, data compression
and data recovery in computer science, electrical engineering and
biology.
In this talk we will explain the significance of SVD and some
recent applications to the following topics:
1. Fast low-rank approximation of matrices using Monte Carlo techniques.
2. A joint SVD decomposition of two or more
matrices to compare several biological processes.
3. Estimation of missing values in given matrix data using inverse
eigenvalue problem techniques, and their applications to DNA
microarrays and image processing.
Most of the results can be found in the following recent papers, which
are available at http://www.math.uic.edu/~friedlan/research.html
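Low-rank approximation via truncated SVD (topic 1 above, without the Monte Carlo speedup) can be sketched in a few lines of NumPy:

```python
import numpy as np

# Rank-r approximation of a matrix via truncated SVD. By the
# Eckart-Young theorem, keeping the r largest singular triplets gives
# the best rank-r approximation in the spectral and Frobenius norms.
def low_rank_approx(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r, :]

A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])
A1 = low_rank_approx(A, 1)       # best rank-1 compression of A
print(np.linalg.matrix_rank(A1))  # 1
```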
Speaker: Eyal Even-Dar
Title: Routing Without Regret
Abstract:
There has been substantial work developing simple, efficient
``no-regret'' algorithms for a number of adaptive game-playing
problems, including online routing. There has also been substantial
work on analyzing properties of Nash equilibria in routing games. In
this paper, we consider the question: if each player in a routing game
uses a no-regret strategy, will behavior converge to a Nash
equilibrium, and under what conditions and in what sense? Note that in
general games the answer is known to be {\em no} in a strong sense,
but routing games have substantially more structure.
In this paper we give a number of positive results. Our main result is
that in the setting of multicommodity flow and infinitesimal agents,
the time-average flow is an $\epsilon$-Nash equilibrium (a flow $f$
whose cost is within $\epsilon$ of the cheapest paths possible given
$f$) for $\epsilon$ approaching 0 at a rate that depends polynomially
on the players' regret bounds and the maximum slope of any latency
function. Moreover, we show the dependence on slope is necessary. We
also give stronger results for special cases such as the case of $n$
parallel links, and also consider the finite-size (non-infinitesimal)
load-balancing model of Azar \cite{AZ98}. Our motivations are similar
to the recent work of Fischer and V\"{o}cking \cite{FV04} and we
discuss relations to their results as well.
Joint work with A. Blum and K. Ligett
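The convergence phenomenon can be illustrated with a minimal sketch, far simpler than the paper's setting: a single multiplicative-weights (Hedge) learner splits flow over two parallel links whose latency equals their load, and its time-average flow approaches the even equilibrium split.

```python
import math

# Toy no-regret routing (illustrative, not the paper's construction):
# one Hedge learner routes unit flow over two parallel links with
# latency l(x) = x. The Nash equilibrium splits the flow 1/2 - 1/2.
def hedge_routing(T=2000, eta=0.05):
    w = [2.0, 1.0]            # start from an uneven split
    avg = [0.0, 0.0]          # time-average flow on each link
    for _ in range(T):
        total = w[0] + w[1]
        f = [w[0] / total, w[1] / total]   # current flow split
        for i in range(2):
            avg[i] += f[i] / T
            # Cost of link i this round is its load f[i]; downweight
            # the costlier link (multiplicative-weights update).
            w[i] *= math.exp(-eta * f[i])
    return avg

print(hedge_routing())   # time-average flow close to [0.5, 0.5]
```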
Speaker: Zvi Lotker, CWI
Title: Publish and Perish: Definition and Analysis of an $n$-Person Publication Impact Game
Abstract:
We consider the following abstraction of competing publications.
There are $n$ players vying for the attention of the audience. The
attention of the audience is abstracted by a single slot which
holds, at any given time, the name of the latest release. Each
player needs to choose, ahead of time, when to release his product,
and the goal is to maximize the amount of time his product is the
latest release. Formally, each player $i$ chooses a point
$x_i\in[0,1]$, and his payoff is the distance from his point $x_i$ to
the next larger point, or to $1$ if $x_i$ is the largest. For this
game, we give a complete characterization of the Nash equilibrium for
the two-player, continuous-action game and, more important, we give an
efficient algorithm to compute the symmetric Nash equilibrium for the
$n$-player, discrete-action game. In both cases, we show that the
(symmetric) equilibrium is unique. Our algorithmic approach to the
$n$-player game is non-standard in that it does not involve solving a
system of differential equations. We believe that our techniques can
be useful in the analysis of other timing games.
This is joint work with Boaz Patt-Shamir and Mark R. Tuttle.
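The payoff rule described above is easy to state in code; this sketch assumes all chosen points are distinct (the abstract does not specify how ties are resolved).

```python
# Payoff rule of the publication impact game as described above:
# each player i picks x_i in [0,1]; player i's payoff is the gap from
# x_i to the next larger chosen point, or to 1 if x_i is the largest.
# Assumes distinct points; tie-breaking is not specified in the abstract.
def payoffs(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    p = [0.0] * len(xs)
    for rank, i in enumerate(order):
        nxt = xs[order[rank + 1]] if rank + 1 < len(xs) else 1.0
        p[i] = nxt - xs[i]
    return p

print(payoffs([0.2, 0.7, 0.5]))   # approximately [0.3, 0.3, 0.2]
```

Note that the payoffs always sum to $1 - \min_i x_i$, so releasing very early grows the total pie but risks being overtaken quickly.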
Speaker: Allon Shafrir
Title: Approximate Top-k Queries in Sensor Networks
Abstract:
We consider a distributed system where each node keeps a local count for items (similar to elections, where nodes are ballots and items are candidates). A top-k query in such a system asks which are the k items whose global count, across all nodes in the system, is the largest. In this paper we present a Monte Carlo algorithm that outputs, with high probability, a set of k candidates which approximates the top-k items. The algorithm is motivated by sensor networks in that it focuses on reducing the communication complexity. In contrast to previous algorithms, the communication complexity depends only on the global scores and not on the number of nodes or the partition of scores among nodes. If the number of nodes is large, our algorithm can dramatically reduce the communication complexity when compared to deterministic algorithms. The complexity of our algorithm is close to a lower bound on the cell-probe complexity of any non-interactive top-k approximation algorithm. If the global scores are relatively rapidly decreasing (such as under the geometric or Zipf distributions), our algorithm achieves polylogarithmic communication complexity per node.
Joint work with Boaz PattShamir.
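The flavor of a sampling-based top-k approximation can be shown with a toy centralized version; this is an illustration of the Monte Carlo idea, not the paper's distributed algorithm.

```python
import random
from collections import Counter

# Toy illustration (not the paper's algorithm): approximate the top-k
# items by sampling a small number of individual "votes" uniformly at
# random instead of aggregating every local count. Heavily skewed
# score distributions need only a few samples to identify the leaders.
def approx_top_k(votes, k, samples=500, seed=0):
    rng = random.Random(seed)
    sample = [rng.choice(votes) for _ in range(samples)]
    return [item for item, _ in Counter(sample).most_common(k)]

# Zipf-like global scores: 'a' and 'b' dominate.
votes = ['a'] * 600 + ['b'] * 300 + ['c'] * 80 + ['d'] * 20
print(approx_top_k(votes, 2))
```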
Speaker: Shie Mannor
Title: Asymptotics of Efficiency Loss in Competitive Market Mechanisms
Abstract:
Decentralized control
mechanisms for networks have the objective of
maximizing social welfare in the face of heterogeneous demand and lack
of coordination among agents. In this talk we consider a probabilistic
setup, where the agents are assumed to be sampled from a given population.
We consider the efficiency loss, in terms of social welfare, between the
best centralized resource allocation mechanism and a simple decentralized
one introduced by Kelly. An algebraic bound on this efficiency loss exists
in the most general setup. We present asymptotic results concerning the
convergence of the loss of efficiency under the random sampling model.
In particular, we show that the loss of efficiency tends to zero
with high probability under some standard assumptions. Moreover, we derive
finite sample bounds for the efficiency loss as a function of the number of
users.
If, however, the assumption of bounded utility functions is relaxed, the loss
of
efficiency no longer converges to zero in a strong probabilistic sense.
These results are further extended to random networks where we analyze the
asymptotic behavior under a similar allocation mechanism.
Corroborating simulations are also presented.
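Kelly's decentralized mechanism referred to above is proportional-share allocation: each user submits a bid, and the resource is divided in proportion to the bids. A minimal sketch:

```python
# Kelly's proportional-share allocation: user i submits a bid w_i for a
# resource of capacity C and receives the share x_i = C * w_i / sum(w).
# Users choose bids selfishly, which is the source of the efficiency
# loss the talk analyzes.
def kelly_allocation(bids, capacity=1.0):
    total = sum(bids)
    return [capacity * w / total for w in bids]

print(kelly_allocation([2.0, 1.0, 1.0]))   # [0.5, 0.25, 0.25]
```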