TBA
Observational Type Theory (OTT) is an extension of dependent type theory designed by Altenkirch and McBride in the 2000s. The central idea of OTT is that, by modifying the notion of equality, it becomes possible to have quotient types and extensional functions without compromising the constructive character of dependent type theory.
In this talk, I will explain how to combine Altenkirch and McBride's observational theory with the Calculus of Constructions, the type theory underlying the Coq proof assistant. In particular, I will explain how to characterize the equality of Coq's inductive types, and how the eliminators of indexed types rely on a computation rule that is delicate to integrate into the theory.
Time permitting, I will give a demonstration of the experimental observational-Coq branch, and discuss recent advances in the set-theoretic semantics of observational type theory.
We consider Maker-Breaker domination games, a variant of positional games, in which two players (Dominator and Staller) alternately claim vertices of a given graph. Dominator's goal is to claim all the vertices of some dominating set, while Staller tries to prevent Dominator from doing so, or at least to delay Dominator's win for as long as possible.
We prove a variety of results about domination games, including the number of turns Dominator needs to win and the size of the smallest dominating set that Dominator can occupy. We also show that speed and size can be far apart, and we prove further counterintuitive statements about the general behaviour of such games.
We also consider the Waiter-Client version of such games.
Co-authors (all from Hamburg University of Technology as well): Ali Deniz Bagdas, Dennis Clemens, Fabian Hamann
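For readers who want to experiment with the game described above, its value (the number of turns Dominator needs against optimal play, with Dominator moving first) can be computed by brute-force minimax on small graphs. The sketch below, including the helper name `mb_domination_moves`, is purely illustrative and not taken from the talk.

```python
import math
from functools import lru_cache

def mb_domination_moves(adj):
    """Minimum number of moves Dominator (moving first) needs to claim a
    dominating set against optimal Staller play; math.inf if Staller wins.
    adj: list of sets, adj[v] = neighbours of vertex v."""
    n = len(adj)
    closed = [frozenset(adj[v]) | {v} for v in range(n)]  # closed neighbourhoods
    all_vertices = frozenset(range(n))

    @lru_cache(maxsize=None)
    def value(dom, stal, dominator_to_move):
        # dom, stal: frozensets of vertices claimed by each player so far
        if all(any(v in closed[u] for u in dom) for v in all_vertices):
            return 0  # Dominator's claimed vertices already dominate the graph
        free = all_vertices - dom - stal
        if not free:
            return math.inf  # board exhausted without a dominating set: Staller wins
        if dominator_to_move:
            # Dominator minimizes the number of his remaining moves
            return 1 + min(value(dom | {v}, stal, False) for v in free)
        # Staller maximizes (ideally forcing math.inf)
        return max(value(dom, stal | {v}, True) for v in free)

    return value(frozenset(), frozenset(), True)
```

For instance, on the path on three vertices Dominator wins in one move (the centre), on the 4-cycle he needs two moves, and on two isolated vertices Staller wins outright.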
Dependent type theory serves as the foundation for a family of proof assistants, of which Agda, Coq and Lean are three modern representatives. The correctness of these implementations rests on metatheoretic properties of dependent type systems, such as the existence of canonical forms and the decidability of conversion or of typing, whose proofs are notoriously complex. In this talk, I will explain the difficulties that arise when mechanizing such metatheoretic proofs, as well as the solutions adopted to formally establish normalization and decidability results for Martin-Löf type theory in the Coq proof assistant. I will also sketch how this project provides an opportunity to develop expressive extensions of proof assistants while preserving solid foundations.
Skeletal Semantics is a meta-language to describe the semantics of programming languages. We present it through several examples, highlighting how complex features can be captured in a readable way using monads. These features range from simple effects like exceptions to more complex ones like generators.
In this talk I will introduce local certification, a notion originating from distributed computing that sheds a new light on the structure of graphs. After giving intuitions about the notion, I will review the recent developments, and make connections with other areas of theoretical computer science (including complexity and logic).
The talk will cover maximum-flow computations in graphs using the Ford-Fulkerson algorithm, and then the use of these flows for image segmentation. I will also discuss various implementations of these algorithms, aimed at improving their efficiency or at adapting their behaviour to the images at hand. I will finish with a few concrete examples of implementations I have tried, and conclude.
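As a concrete illustration of the algorithm mentioned above, here is a minimal max-flow sketch using the BFS-based (Edmonds-Karp) variant of Ford-Fulkerson; the function name and the dictionary-based graph representation are choices made for this example, not taken from the talk.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Max s-t flow via Edmonds-Karp (Ford-Fulkerson with BFS).
    capacity: dict u -> dict v -> capacity of the directed edge (u, v)."""
    # Build residual capacities, adding reverse edges with capacity 0.
    residual = {}
    for u, nbrs in capacity.items():
        for v, c in nbrs.items():
            residual.setdefault(u, {})[v] = residual.get(u, {}).get(v, 0) + c
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: the flow is maximum
        # Recover the path and its bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Push the bottleneck along the path, updating residual capacities.
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

In graph-cut image segmentation, the same routine is typically run on a grid graph whose terminal edges encode pixel likelihoods and whose neighbour edges encode smoothness costs; the minimum cut induced by the maximum flow yields the segmentation.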
Many categorical models of linear logic with fixed points arise as total categories over the category Rel of sets and relations. They are of the form ∫Q, the Grothendieck category of a functor Q : Rel -> Pos. We will define the notions of fixed point and Grothendieck category, then give a result for lifting functors from the base category to the total category, and study how to lift fixed points as well. In particular, the category of coalgebras for the lifted functor is a total category, and when Q factors through SLatt, the category of posets with joins and join-preserving maps, we obtain the same result.
Graph embedding approaches attempt to project graphs into geometric entities, {\em i.e.}, manifolds. At its core, the idea is that the geometric properties of the projected manifolds are helpful for inferring graph properties. However, the choice of the embedding manifold is critical and, if performed incorrectly, can lead to misleading interpretations due to incorrect geometric inference. We argue that classical embedding techniques cannot lead to a correct geometric interpretation, since the microscopic details of the manifold, {\em e.g.}, the curvature at each point, that are needed to derive geometric properties with Riemannian-geometry methods are not available, and we cannot evaluate the impact of the underlying space on the geometric properties of the shapes that lie on it. We advocate that, for a correct geometric interpretation, the embedding of a graph should be done over regular constant-curvature manifolds. To this end, we present an embedding approach, discrete Ricci flow graph embedding (dRfge), based on the discrete Ricci flow, which adapts the distances between nodes in a graph so that the graph can be embedded onto a constant-curvature manifold that is homogeneous and isotropic, {\em i.e.}, all directions are equivalent and distances comparable, resulting in correct geometric interpretations. A major contribution of this paper is that, for the first time, we prove the convergence of the discrete Ricci flow to a constant curvature and a stable distance metric over the edges. A drawback of the discrete Ricci flow is its high computational complexity, which has prevented its use in large-scale graph analysis. Another contribution of our work is a new algorithmic solution that makes it feasible to compute the Ricci flow for graphs of up to 50k nodes, and beyond. The intuitions behind the discrete Ricci flow make it possible to obtain new insights into the structure of large-scale graphs.
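The flow described above iteratively adjusts edge distances according to their curvature until curvature becomes constant. The toy sketch below illustrates only that fixed-point behaviour, using the deviation of an edge weight from the mean weight as a stand-in for curvature; the actual dRfge computes a genuine discrete Ricci curvature, which this example does not attempt to do.

```python
def ricci_flow_step(weights, eps=0.1):
    """One step of a toy 'Ricci flow' on edge weights.
    weights: dict edge -> positive weight.
    Curvature proxy (an assumption of this sketch, not the paper's notion):
    the deviation of each edge weight from the mean weight. The update
    w <- w - eps * kappa drives all 'curvatures' to the same constant (zero),
    mimicking the convergence to constant curvature described above."""
    mean = sum(weights.values()) / len(weights)
    return {e: w - eps * (w - mean) for e, w in weights.items()}
```

Iterating this step contracts the deviations geometrically (by a factor 1 - eps per step) while preserving the total weight, so the weights converge to a common stable value, the toy analogue of a constant-curvature, stable distance metric.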
Implicative algebras, recently discovered by Miquel, are combinatorial structures unifying classical and intuitionistic realizability as well as forcing. In this talk we introduce implicative assemblies as sets valued in the separator of an underlying implicative algebra. Given a fixed implicative algebra A, implicative assemblies over A organise themselves in a category AsmA with tracked set-theoretical functions as morphisms. We show that AsmA is a quasitopos with a natural numbers object (NNO).
How a result from category theory inspired a solution to Darcs's performance problems. Darcs is a patch-based version control system released in 2005; its patch-based design made it extremely simple to use and particularly rigorous, especially in its handling of conflicts. The only problem: its most common operation (applying patches) sometimes took time exponential in the size of the history. I will explain how we solved the problem, using categories and purely functional data structures, to design an algorithm that runs in logarithmic time in all cases (except one degenerate case, which is linear-time).
The result is a deterministic system (which distinguishes it from all other version control tools), easy to learn and use, while scaling to sizes that no other distributed version control system can reach.
In a first part, I'll broadly cover what the Laplace-Beltrami operator is, its different characterizations, and its many uses in computer graphics.
Then I'll cover our work on building this operator (along with a wider digital calculus framework) on digital surfaces (boundaries of voxels). These surfaces cannot directly benefit from classical mesh calculus frameworks. In our recent work, we propose two new calculus frameworks dedicated to digital surfaces, which exploit a corrected normal field. First, we build a corrected interpolated calculus by defining inner products with position and normal interpolation in the Grassmannian. Second, we present a corrected finite element method which adapts the standard Finite Element Method with a corrected metric per element. Experiments show that these digital calculus frameworks seem to converge toward the continuous calculus, offer a valid alternative to classical mesh calculus, and induce effective tools for digital surface processing tasks.
Normal form bisimilarities are a natural form of program equivalence resting on open terms, first introduced by Sangiorgi in call-by-name. The literature contains a normal form bisimilarity for Plotkin's call-by-value 𝜆-calculus, Lassen's enf bisimilarity, which validates all of Moggi's monadic laws and can be extended to validate 𝜂. It does not validate, however, other relevant principles, such as the identification of meaningless terms—validated instead by Sangiorgi's bisimilarity—or the commutation of lets. These shortcomings are due to issues with open terms of Plotkin's calculus. We introduce a new call-by-value normal form bisimilarity, dubbed net bisimilarity, closer in spirit to Sangiorgi's and satisfying the additional principles. We develop it on top of an existing formalism designed for dealing with open terms in call-by-value. It turns out that enf and net bisimilarities are incomparable, as net bisimilarity validates neither Moggi's laws nor 𝜂. Moreover, there is no easy way to merge them. To better understand the situation, we provide an analysis of the rich range of possible call-by-value normal form bisimilarities, relating them to Ehrhard's relational model.
We give a characterization of star-free languages (a well-known subclass of regular languages) in a λ-calculus with support for non-commutative affine types (in the sense of linear logic), via the algebraic characterization of the former using aperiodic monoids. This was the first result in a research program that Cécilia Pradic (Swansea University) and I have carried out since my PhD, on which I will say a few words.
Type theory is a family of formal systems ranging from programming language semantics to the foundations of mathematics. In practice, type theories are defined by means of “inference rules”. Everyone in the community understands them to some extent, but they do not have any commonly accepted rigorous interpretation. Or, rather, they have several interpretations, none of which is entirely satisfactory.
In this work, after a brief overview of the literature, we introduce a rigorous, semantic notion of inference rule, our thesis being that most syntactic inference rules written in practice may be directly interpreted in this framework. If time permits, we will sketch how this covers quantitative type theories.
This is joint work in progress with André Hirschowitz and Ambroise Lafont.
I describe a P-time heuristic to reconstruct a subclass of degree sequences of 3-uniform hypergraphs. The heuristic is based on some geometric properties of the hypergraphs involved, and also produces a small set of ambiguous hyperedges that have to be considered individually.
We consider the problem of detecting and reconstructing degree sequences of 3-uniform hypergraphs. The problem is known to be NP-hard, so some subclasses are inspected in order to detect instances that admit a P-time reconstruction algorithm. Here, I consider instances having specific numerical patterns and I show how to easily reconstruct them.
We initiate the study of the algorithmic complexity of Maker-Breaker games played on edge sets of graphs for general graphs. We mainly consider three of the big four such games: the connectivity game, perfect matching game, and H-game. Maker wins if she claims the edges of a spanning tree in the first, a perfect matching in the second, and a copy of a fixed graph H in the third. We prove that deciding who wins the perfect matching game and the H-game is PSPACE-complete, even for the latter in graphs of small diameter if H is a tree. Seeking to find the smallest graph H such that the H-game is PSPACE-complete, we also prove that there exists such an H of order 51.
On the positive side, we show that the connectivity game and arboricity-k game are polynomial-time solvable. We then give several positive results for the H-game, first giving a structural characterization for Breaker to win the P4-game, which gives a linear-time algorithm for the P4-game. We provide a structural characterization for Maker to win the K_{1,l}-game in trees, which implies a linear-time algorithm for the K_{1,l}-game in trees. Lastly, we prove that the K_{1,l}-game in any graph, and the H-game in trees are both FPT parameterized by the length of the game. We leave the complexity of the last of the big four games, the Hamiltonicity game, as an open question.
ML modules offer large-scale notions of composition and modularity. Provided as an additional layer on top of the core language, they have proven both vital to working OCaml and SML programmers, and inspiring to other use-cases and languages. Unfortunately, their metatheory remains difficult to comprehend, and more recent extensions (abstract signatures, module aliases) lack a complete formalization. Building on a previous translation from ML modules to Fω, we propose a new comprehensive description of a significant subset of OCaml modules, including both applicative and generative functors and transparent ascription -- a useful new feature. By exploring the translations both to and from Fω, we provide a complete description of the signature avoidance issue, as well as insights on the limitations and benefits of the path-based approach to OCaml type sharing.