In this talk we will discuss a link between the geometry of continued fractions and global relations for singularities of complex projective toric surfaces. The results are based on the recent development of lattice trigonometric functions, which are invariant under the action of the group Aff(2,Z).
Markov numbers are the positive integers that appear in triples of solutions of the Diophantine equation x^2+y^2+z^2 = 3xyz, called the Markov equation. All solutions can be obtained from a single triple by a simple algorithm. Yet there is a famous open problem, formulated by Frobenius: is it true that, given a positive integer z, there is at most one pair (x,y) of positive integers with x < y < z such that (x,y,z) is a solution? These numbers arise in the context of continued fractions and of the Diophantine approximation of irrational real numbers by rationals. They also appear in many other areas of mathematics, such as binary quadratic forms, hyperbolic geometry, and combinatorics on words. The aim of this talk is to present the part of Markov theory built around the Markov equation and to state the uniqueness conjecture on Markov numbers. Finally, we will introduce an involution on the irrationals that may be relevant to the subject.
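To make the simple algorithm mentioned above concrete: fixing two coordinates, the Markov equation is quadratic in the third, so each solution (x,y,z) yields the new solutions (3yz - x, y, z), (x, 3xz - y, z) and (x, y, 3xy - z), and all triples are obtained from (1,1,1) by these moves. A minimal Python sketch (illustrative code, not taken from the talk):

def markov_triples(bound):
    """Enumerate Markov triples (x <= y <= z) with z <= bound, starting
    from (1, 1, 1) and applying the three Vieta moves."""
    seen = set()
    stack = [(1, 1, 1)]
    while stack:
        x, y, z = sorted(stack.pop())
        if (x, y, z) in seen or z > bound:
            continue
        assert x*x + y*y + z*z == 3*x*y*z  # the moves preserve the equation
        seen.add((x, y, z))
        stack += [(3*y*z - x, y, z), (x, 3*x*z - y, z), (x, y, 3*x*y - z)]
    return sorted(seen, key=lambda t: t[2])

for t in markov_triples(1000):
    print(t)

The largest coordinates printed (1, 2, 5, 13, 29, 34, ...) are exactly the Markov numbers, and the uniqueness conjecture asks whether each of them appears as the maximum of only one triple.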
Implicative algebras, developed by Alexandre Miquel, are very simple algebraic structures generalizing at the same time complete Boolean algebras and Krivine realizability algebras, in such a way that they make it possible to express in the same setting the theory of forcing (in the sense of Cohen) and the theory of classical realizability (in the sense of Krivine). Besides, they have the nice feature of providing a common framework for the interpretation of both types and programs. The main defect of these structures is that they are deeply oriented towards the λ-calculus, and that they only allow a faithful interpretation of languages in call-by-name. To remedy this situation, we introduce two variants of implicative algebras: disjunctive algebras, centered on the “par” (⅋) connective of linear logic (but in a non-linear framework) and naturally adapted to languages in call-by-name; and conjunctive algebras, centered on the “tensor” (⊗) connective of linear logic and adapted to languages in call-by-value. Among other properties, we will see that disjunctive algebras are particular cases of implicative algebras and that conjunctive algebras can be obtained from disjunctive algebras (by reversing the underlying order).
A majority automaton consists in applying, over the vertices of an undirected graph (with states 0 and 1), an operator that chooses the most represented state among the neighbours of a vertex. This rule is applied in parallel over all the nodes of the graph. When the graph is a regular lattice (in one or more dimensions), it is called the majority cellular automaton. In this seminar we will study the computational complexity of the following prediction problem. PRED: given an initial configuration and a specific site initially at state a (0 or 1), is there a time step T ≥ 1 at which this site changes state? The complexity of PRED is characterized by the possibility of finding an algorithm that gives the answer faster than running the automaton simulation on a serial computer; more precisely, by whether there is an algorithm running on a parallel computer in polylogarithmic time (class NC). Otherwise, the problem may be P-complete (among the most difficult in the class P of polynomial-time problems) or worse. We will apply this kind of result to the discrete Schelling segregation model. We will also present Sakoda's discrete social model.
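As a baseline for PRED, here is a minimal Python sketch of the brute-force simulation approach for the one-dimensional majority cellular automaton (radius 1, periodic boundaries, the cell itself counted so no ties arise — conventions chosen here for illustration, not fixed by the abstract):

def majority_step(config):
    """One parallel update: each cell takes the majority state among
    itself and its two neighbours (periodic boundary)."""
    n = len(config)
    return [1 if config[(i - 1) % n] + config[i] + config[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def pred(config, site, horizon):
    """Brute-force PRED by plain simulation: does `site` ever change
    state within `horizon` steps?  This is the serial baseline that an
    NC algorithm would have to beat."""
    initial = config[site]
    for _ in range(horizon):
        config = majority_step(config)
        if config[site] != initial:
            return True
    return False

print(pred([0, 1, 1, 0, 0, 0, 1, 0], site=0, horizon=10))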
TBA
Complexity theory helps us predict and control resources, usually time and space, consumed by programs. Static analysis based on specific syntactic criteria allows us to categorize some programs. A common approach is to observe the behaviour of the program's data. For instance, the detection of non-size-increasing programs is based on a simple principle: counting memory allocations and deallocations, particularly in loops. In this way, we can detect programs that compute within a constant amount of space. This method can easily be expressed as a property of control flow graphs. Because analyses of data behaviour are syntactic, they can be done at compile time. Because they are only static, these analyses are not always computable, or easily computable, and approximations are needed. The ``Size-Change Principle'' of C. S. Lee, N. D. Jones and A. M. Ben-Amram presented a method to predict termination by observing the evolution of resources, and a lot of research grew out of this theory. Until now, these implicit complexity techniques were essentially applied to more or less toy languages. This thesis applies implicit computational complexity methods to ``real life'' programs by manipulating the intermediate representation languages of compilers. This gives an accurate idea of the actual expressivity of these analyses and shows that the implicit computational complexity and compiler communities can fuel each other fruitfully. As we show in this thesis, the methods developed are quite general and open the way to several new applications.
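As an illustration of the allocation-counting principle described above, here is a toy Python sketch (names and graph representation are ours, not taken from the thesis): label each control-flow edge with its net memory effect and accept the program as non-size-increasing only if no cycle of the control flow graph has strictly positive total weight, which reduces to positive-cycle detection.

def has_positive_cycle(num_nodes, edges):
    """Bellman-Ford on negated weights: a negative cycle there is a
    positive-weight cycle in the original graph.  `edges` is a list of
    (source, target, net_allocations) triples."""
    dist = [0] * num_nodes          # start from every node at once
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] - w < dist[v]:
                dist[v] = dist[u] - w
    # one more relaxation round: any improvement exposes a cycle
    return any(dist[u] - w < dist[v] for u, v, w in edges)

def non_size_increasing(num_nodes, edges):
    return not has_positive_cycle(num_nodes, edges)

# A loop that allocates once and frees once per iteration: accepted.
print(non_size_increasing(2, [(0, 1, +1), (1, 0, -1)]))  # True
# A loop that allocates once per iteration without freeing: rejected.
print(non_size_increasing(2, [(0, 1, +1), (1, 0, 0)]))   # False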
Let c be a rational number, and consider the polynomial φ(X) = X^2 - c. We are interested in the cycles of φ in Q, and more precisely in one of Poonen's conjectures, according to which every cycle of φ in Q has length at most 3. In this talk, we will discuss this conjecture and recall the known results. We will then use simple arithmetic, combinatorial, and analytic means to study particular cases of this problem. The tools used in this talk are accessible to second-year master's students.
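For concreteness, rational cycles of lengths 1, 2 and 3 do occur, and they can be checked exactly with rational arithmetic. A minimal Python sketch (the examples are classical: x = 3/2 is fixed by φ for c = 3/4, and c = 29/16 has the 3-cycle -1/4 → -7/4 → 5/4):

from fractions import Fraction

def cycle_length(c, x0, max_len=10):
    """Iterate phi(x) = x^2 - c exactly over Q and return the least
    n >= 1 with phi^n(x0) == x0, or None if x0 is not periodic of
    period <= max_len."""
    x = x0
    for n in range(1, max_len + 1):
        x = x * x - c
        if x == x0:
            return n
    return None

print(cycle_length(Fraction(3, 4), Fraction(3, 2)))    # 1 (fixed point)
print(cycle_length(Fraction(29, 16), Fraction(-1, 4))) # 3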
Differential linear logic (DiLL) arose from the study of vector-space models of linear logic, in which proofs are interpreted by more or less formal power series. These are therefore discrete models, where the differential extracts the linear part of a power series. We seek a continuous model of classical differential linear logic: we need both a cartesian closed category of smooth functions and a monoidal closed category of reflexive spaces. We will detail a partial solution to this problem, via nuclear spaces and spaces of distributions. We will see how this model suggests a syntax separated into classes of formulas, each class corresponding to the solutions of a linear PDE. We will show that each class attached to a PDE whose solution can be constructed behaves like an intermediate exponential and satisfies the exponential rules of differential linear logic. Time permitting, we will discuss joint work with Y. Dabrowski, in which we find several smooth models of differential linear logic by making the discriminating choice of interpreting the multiplicative disjunction of LL as the Schwartz epsilon product.
In the first part of this talk, I'll recall the construction of the category of games and innocent deterministic strategies introduced by Harmer, Hyland and Melliès [1]. Compared with the original method of Hyland and Ong [2], this construction has two specific advantages. First, it isolates the structural conditions on certain games and strategies by introducing separate entities (the schedules) that concentrate most of the required proof work. Second, it lays out a fairly clear combinatorial ‘recipe’ that can be replicated in other settings. That will be the goal of the second part of the talk, which will develop a 2-categorical and sheaf-theoretic formulation of non-deterministic innocent strategies based on this ‘recipe’. Along the way, I'll outline specific properties that give us a better understanding of both deterministic and non-deterministic strategies.
[1] Harmer, Hyland, Melliès, Categorical combinatorics of innocent strategies, LICS 2007.
[2] Hyland, Ong, On full abstraction for PCF I, II and III, Information and Computation, 2000.
The lambda-calculus possesses a strong notion of extensionality, called ``the omega-rule'', which has been the subject of many investigations. It is a longstanding open problem whether the equivalence obtained by closing the theory of Böhm trees under the omega-rule is strictly included in Morris's original observational theory, as conjectured by Sallé in the seventies. We will first show that Morris's theory satisfies the omega-rule. We will then demonstrate that the two aforementioned theories actually coincide, thus disproving Sallé's conjecture.
The proof technique we develop is general enough to provide as a byproduct a new characterization, based on bounded eta-expansions, of the least extensional equality between Böhm trees.
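For reference, the omega-rule in question is the following extensionality principle (standard formulation, recalled here for convenience, not a quotation from the talk): for a λ-theory T and terms M, N,

  M P =_T N P for every closed term P  implies  M =_T N.

Closing the theory of Böhm trees under this rule yields the equivalence that the talk compares with Morris's observational theory.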
Herbrand's theorem, widely regarded as a cornerstone of proof theory, exposes some of the constructive content of classical logic. In its simplest form, it reduces the validity of a first-order purely existential formula to that of a finite disjunction. More generally, it gives a reduction of first-order validity to propositional validity, by understanding the structure of the assignment of first-order terms to existential quantifiers, and the causal dependency between quantifiers. In this paper, we show that Herbrand's theorem in its general form can be elegantly stated as a theorem in the framework of concurrent games. The causal structure of concurrent strategies, paired with annotations by first-order terms, is used to specify the dependency between quantifiers. Furthermore, concurrent strategies can be composed, yielding a compositional proof of Herbrand's theorem, simply by interpreting classical sequent proofs in a well-chosen denotational model.
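As a concrete instance of the simplest form (a standard textbook example, not drawn from the paper): the drinker formula ∃x (D(x) → ∀y D(y)) Herbrandizes to ∃x (D(x) → D(f(x))), and while no single instance D(t) → D(f(t)) is a propositional tautology, the two-instance disjunction

  (D(c) → D(f(c))) ∨ (D(f(c)) → D(f(f(c))))

is one: if D(f(c)) holds, the first disjunct is true; otherwise the second is. The fact that the right witness depends on how earlier instances are answered is exactly the kind of causal dependency between quantifiers that the concurrent-games formulation is designed to track.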
Game semantics is a class of denotational models for programming languages, in which types are interpreted as games and programs as strategies. In order to interpret pure programs, one has to restrict to innocent strategies. Two key results then entail that they are the morphisms of a category: associativity of composition and stability of innocence. These are non-trivial results, and significant work, notably by Melliès, has been devoted to better understanding them. Recently, game models have been extended to concurrent languages, using presheaves as generalised strategies and recasting innocence as a sheaf condition. We here revisit composition of strategies in concurrent game semantics with abstract categorical tools that make most of the reasoning automatic, and we extract the few crucial lemmas that give composition of strategies all its desired properties. The approach applies to non-concurrent strategies as well.
I will present several proofs of normalization for the simply typed lambda-calculus, with an upper bound on the length of a normalization of a given typed term. I will briefly explain how the last result generalizes to the simply typed lambda-mu-calculus. The talk is accessible to all members of the team.
The Cook-Levin theorem (the statement that SAT is NP-complete) is a central result in structural complexity theory. What would a proof of the Cook-Levin theorem look like if the reference model of computation were not Turing machines but the lambda-calculus? We will see that exploring the alternative universe of ``structural complexity with lambda-terms instead of Turing machines'' brings up a series of interesting questions, both technical and philosophical. Some of these have satisfactory answers, leading us in particular to a proof of the Cook-Levin theorem using tools of proof-theoretic and semantic origin (linear logic, Scott continuity), but many others are fully open, leaving us wondering about the interactions (or lack thereof) between complexity theory and proof theory/programming languages theory.
Two polynomials f and g are said to be algebraically dependent over a field K if there exists a non-zero bivariate polynomial A with coefficients in K such that A(f,g)=0. If no such polynomial exists, we say that f and g are independent. This is a natural generalization of linear independence to higher degrees. We consider the problem of finding an algorithm to test whether a given set of polynomials is algebraically independent. When the field has characteristic zero, this problem has a randomised polynomial-time algorithm using the Jacobian matrix of the given polynomials. However, the criterion fails when the polynomials are taken over fields of positive characteristic. One may expect the positive-characteristic case to also admit an efficient algorithm for testing algebraic independence; however, the best known upper bound is very far from the desired one. The talk will cover a result which is an attempt to bridge this gap. We present a new algorithm based on a refined generalisation of the Jacobian criterion for fields of positive characteristic. It also yields a structural result about algebraically dependent polynomials: approximate functional dependence.
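To make the characteristic-zero criterion concrete, here is a minimal Python sketch using sympy (the helper name algebraically_independent is ours, purely illustrative): f_1, ..., f_k over Q are algebraically independent if and only if their Jacobian matrix has rank k.

import sympy as sp

def algebraically_independent(polys, variables):
    """Jacobian criterion (valid in characteristic 0 only)."""
    jac = sp.Matrix([[sp.diff(f, v) for v in variables] for f in polys])
    return jac.rank() == len(polys)

x, y = sp.symbols('x y')
# Dependent: g = f^2, witnessed by A(f, g) = g - f^2 = 0.
print(algebraically_independent([x + y, (x + y)**2], [x, y]))  # False
# Independent:
print(algebraically_independent([x, y], [x, y]))               # True
# In characteristic p the criterion fails: over F_p, d(x^p)/dx = 0
# even though the single polynomial x^p is algebraically independent.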
While it is common, in the realizability community, to view a type as the set of its proofs, and hence as a set of terms, it is just as common, in the intersection types community, to view a term as the set of its possible behaviours, and hence as a set of types. In this talk, I will present intersection types in detail, as they are less well known within LAMA. I will then try to make explicit the links with realizability: why it is interesting to combine the two, and whether the two operations can be seen as dual in some sense. This will not (or only marginally) be about personal work, but rather a general and transversal presentation of the field.