BCTCS & ALGOUK 2019

35th British Colloquium for
Theoretical Computer Science
15th – 17th April 2019, DURHAM

The British Colloquium for Theoretical Computer Science (BCTCS) is an annual
event for UK-based researchers in theoretical computer science. A central aspect
of BCTCS is the training of PhD students, providing an environment for students
to gain experience in presenting their work, to broaden their outlook on the
subject, and to benefit from contact with established researchers. The scope of
the colloquium includes all aspects of theoretical computer science, including
automata theory, algorithms, complexity theory, semantics, formal methods,
concurrency, types, languages and logics.

BCTCS 2019 is the 35th conference in the series and is being held together with
a workshop organized by AlgoUK, a network for the UK’s algorithms and complexity
research community.

Poster: please download and display our poster.

A Programme is now available.


INVITED BCTCS AND ALGOUK SPEAKERS


ANDY ADAMATZKY

UWE

Read Bio


ULRICH BERGER

SWANSEA

Read Bio


MARIA CHUDNOVSKY

PRINCETON

Read Bio


RICHARD JOZSA

CAMBRIDGE

Read Bio


VIVIEN KENDON

DURHAM

Read Bio


KITTY MEEKS

GLASGOW

Read Bio


BAHAR RASTEGARI

SOUTHAMPTON

Read Bio


BERNHARD VON STENGEL

LSE

Read Bio


SUSAN STEPNEY

YORK

Read Bio


JAMES WORRELL

OXFORD

Read Bio


PROGRAMME

All talks, refreshments and lunches are in the Department of Computer Science in
the connected Christopher and Higginson Buildings. Plenary and Track B talks are
in E005, which is at the front of the Christopher Building (the building faces
north), next to the main entrance to the department. A reception desk will be
located outside the lecture theatre from 12pm on Monday 15 April. Track A talks
are in E240 in the Higginson Building towards the back of the department.
Lunches and refreshments are in the Atrium adjacent to E240. See
the travel information for advice and maps to help you find us.


MONDAY 15 APRIL


ALGOUK SESSION ON NON-STANDARD COMPUTING (E005)




13:30 ANDREW ADAMATZKY: LIQUID COMPUTERS

We show that a liquid is a universal computing substrate. A liquid can be used
to represent signals, actuate mechanical computing devices and to modify signals
via chemical reactions. A liquid per se can host thousands of chemical
microprocessors. We give an overview of liquid-based computing devices developed
over hundreds of years. In these devices liquid is used in various roles. Mass
transfer analogies are employed in hydraulic mathematical machines and
integrators. Flows of liquid explore and map templates, and solve mazes, in
liquid mappers. Interacting liquid jets realise logical gates in fluidic logic
devices, where Boolean values are represented by flow pressure in the
channels. The liquid can be discretised into droplets and liquid marbles, and
computation is implemented via marbles/droplets colliding with each other or
actuating mechanical devices. We give examples of liquid reaction-diffusion
chemical computers, where oxidation wave fronts or diffusing fronts of
precipitation solve logical and geometrical problems. We will also present
advances in liquid electronics, liquid switches and transistors. If time allows
we will talk about colloidal computers, including sol, emulsion and foam.


14:15 VIVIEN KENDON: CONTINUOUS-TIME QUANTUM COMPUTING

I will present a generalising and unifying view of quantum computing with
continuous-time quantum walks, adiabatic quantum computing, quantum annealing,
and a range of special purpose quantum simulators. The key feature they all
share is evolving the initial quantum state to the final quantum state using a
continuous-time process. Looking more closely, the similarities go further, with
the same Hamiltonians used to encode the problems across adiabatic quantum
computing, quantum annealing, and quantum walks for implementing the same
algorithm. The differences are in the time-dependence of the Hamiltonian, and
how long it is applied for. This invites the question of which is best,
especially in a practical setting where resources such as quantum coherence are
severely limited. The shorter run times of quantum walks and quantum annealing
may be favoured over slower adiabatic strategies, and hybrid algorithms
exploiting the strengths of each strategy allow the best performance to be
extracted from real hardware (Morley et al. (2017). Quantum search with hybrid
adiabatic-quantum walk algorithms and realistic noise. arXiv:1709.00371. To
appear in Phys. Rev. A.)

This unified view also has implications for the design of quantum computing
hardware, which can potentially be used to implement any or all types of
continuous time evolution, as required for the particular problem to be solved.
We therefore identify continuous-time quantum computing (CTQC) as a unifying
model of quantum computation with a continuous-time evolution applied to
discrete data. It consists of a register of qudits – with qubits as a special
case – to which Hamiltonian time evolution is applied that can vary continuously
with time, along with coupling to an engineered environment that contributes
decoherence, energy exchange and other open quantum system effects. As with the
circuit model, the output can be the quantum state in the register at the end of
the process, or the classical outcomes of measurements in the computational
basis on the quantum register. This contrasts with the circuit model, in which
discrete unitary quantum gates are applied to the register, with error
correction to ensure the computation remains coherent throughout.

Continuous-time quantum computing is complementary to the circuit model, and it
provides a computational paradigm for quantum computing in which techniques can
be adopted across what were previously seen as separate approaches. In
particular, it suggests that hardware platforms suitable for one should be
suitable for all, thus providing a focus for hardware design and development of
continuous-time quantum computers, alongside digital quantum computers.


15:00 COFFEE




15:30 SUSAN STEPNEY: UNCONVENTIONAL DESIGN OF UNCONVENTIONAL COMPUTERS

There are many proposed unconventional computational models: reservoir
computing, general purpose analogue computing, membrane computing, quantum
computing, morphogenetic computing, and more. There are just as many proposed
unconventional computing substrates: slime moulds, carbon nanotubes, gene
engineered bacteria, gold nanoparticle networks, memristors, optical relays, and
more. But how to match model and substrate to get an effective unconventional
computer?

We have developed CHARC, a framework to characterise how well some material
substrate can instantiate and implement a given computational model. We have
applied this to a reservoir computing model instantiated in carbon nanotubes,
and also in simulations. This works by comparing candidate instantiations with a
baseline system, and using novelty search over computational behaviour space to
quantify the achievable scope of the instantiation.

I will describe this framework, and how we are developing it further: to
characterise how faithfully some abstract model represents the computing
properties of a given substrate; and to co-design unconventional models and
substrates that are natural computational partners.


16:15 RICHARD JOZSA: CLIFFORD OPERATIONS AND THE VERIFICATION OF QUANTUM
COMPUTATIONS

Quantum computation has a remarkably rich diversity of models, each offering
different perspectives on its relation to classical computing, and also on
implementational challenges. We will introduce and discuss the model of quantum
computing based on Clifford gates and so-called magic states.

In this model, efficiently classically simulatable quantum processes are
elevated to full universal quantum computing power by inclusion of an additional
resource that can be argued to be entirely classical. We will also outline how
this feature suggests a novel and simple approach to the issue of verifying the
correctness of a quantum computer that is operating in the high complexity
regime, beyond the remit of any foreseeable classical computing power.


17:00 RECEPTION




TUESDAY 16 APRIL


ALGOUK SESSION ON ALGORITHMICS (E005)




09:30 JAMES WORRELL: SYNTHESISING POLYNOMIAL PROGRAM INVARIANTS

Automatically generating invariants is a fundamental challenge in program
analysis and verification. In this talk we give a select overview of previous
work on this problem, starting with Michael Karr’s 1976 algorithm for computing
affine invariants in affine programs, i.e., programs having only
non-deterministic (as opposed to conditional) branching and all of whose
assignments are given by affine expressions. We then present the main result of
the talk – an algorithm to compute all polynomial invariants that hold at each
location of a given affine program. Our main tool is an algebraic result of
independent interest: given a finite set of rational square matrices of the same
dimension, we show how to compute the Zariski closure of the semigroup that they
generate.

The talk is based on joint work with Ehud Hrushovski, Joël Ouaknine, and Amaury
Pouly.
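As a concrete illustration of the objects involved (a toy example of my own, not
one from the talk): consider the affine program with simultaneous update
x := 2x, y := 4y started from x = y = 1. Since (2x)² − 4y = 4(x² − y), the
polynomial x² − y vanishes at every visit to the loop head, so x² − y = 0 is a
polynomial invariant of this affine program. A minimal Python check:

    # Hypothetical affine program: no conditional branching, affine updates.
    # The polynomial x^2 - y vanishes at every visit to the loop head.
    x, y = 1, 1
    for _ in range(10):
        assert x * x - y == 0      # polynomial invariant x^2 - y = 0
        x, y = 2 * x, 4 * y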


10:15 BAHAR RASTEGARI: STABLE MARRIAGE WITH GROUPS OF SIMILAR AGENTS

Matching problems occur in various applications and scenarios such as the
assignment of children to schools and junior doctors to hospitals. In such
problems, it is often understood that the participants (which we will refer to
as agents) have ordinal preferences over a subset of the agents. It is also widely
accepted that a “good” solution must be stable. Intuitively speaking, a stable
solution guarantees that no subset of agents find it in their best interest to
leave the prescribed solution and seek an assignment amongst themselves.

Many important stable matching problems are known to be NP-hard, even when
strong restrictions are placed on the input. In this talk, I’ll identify
structural properties of instances of stable matching problems which will allow
us to design efficient algorithms using elementary techniques. I focus on the
setting in which all agents involved in some matching problem can be partitioned
into k different types, where the type of an agent determines his or her
preferences, and agents have preferences over types (which may be refined by
more detailed preferences within a single type). I also consider a
generalisation in which each agent may consider some small collection of other
agents to be exceptional, and rank these in a way that is not consistent with
their types.

I will show that, for the case without exceptions, the well-known NP-hard
matching problem MAX SMTI (that of finding the maximum cardinality stable
matching in an instance of stable marriage with ties and incomplete lists)
belongs to the parameterised complexity class FPT when parameterised by the
number of different types of agents needed to describe the instance. I will
demonstrate how this tractability result can be extended to the setting in which
each agent promotes at most one “exceptional” candidate to the top of his/her
list (when preferences within types are not refined), but that the problem
remains NP-hard if preference lists can contain two or more exceptions and the
exceptional candidates can be placed anywhere in the preference lists.

The talk is based on joint work with Kitty Meeks.


11:00 COFFEE




11:30 KITTY MEEKS: FROM DECISION TO (APPROXIMATE) COUNTING

Decision problems – those in which the goal is to return an answer of “YES” or
“NO” – are at the heart of the theory of computational complexity, and remain
the most studied problems in the theoretical algorithms community. However, in
many real-world applications it is not enough just to decide whether the
problem under consideration admits a solution: we often want to find all
solutions, or at least count (either exactly or approximately) their total
number. It is clear that finding or counting all solutions is at least as
computationally difficult as deciding whether a single one exists, and indeed
in many cases it is strictly harder (assuming P ≠ NP) even to count
approximately the number of solutions than it is to decide whether there exists
at least one.

In this talk I will discuss a restricted family of problems, in which we are
interested in solutions of a specified size: for example, solutions could be
copies of a given k-vertex graph H in a large host graph G, or more generally
k-vertex subgraphs of G that have some specified property (e.g. k-vertex
subgraphs that are connected). In this setting, although exact counting is
strictly harder than decision (under standard assumptions in parameterised
complexity), the methods typically used to separate approximate counting from
decision break down. Indeed, I will demonstrate a method that, subject to
certain additional assumptions, allows us to transform an efficient decision
algorithm for a problem of this form into an approximate counting algorithm,
with only a small increase in running time.
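As a point of reference, the naive exact counter for one problem of this shape
(connected k-vertex induced subgraphs) is easy to write but exponential in k;
the talk's interest is in upgrading efficient decision to efficient approximate
counting instead. A minimal sketch of the brute force (an illustration of mine,
not the talk's method), with the host graph given as an adjacency dictionary:

    from itertools import combinations

    def count_connected_k_subgraphs(adj, k):
        """Count k-vertex subsets (k >= 1) inducing a connected subgraph.
        adj: dict mapping each vertex to its set of neighbours."""
        def connected(S):
            S = set(S)
            start = next(iter(S))
            seen, stack = {start}, [start]
            while stack:
                v = stack.pop()
                for u in (adj[v] & S) - seen:
                    seen.add(u)
                    stack.append(u)
            return len(seen) == len(S)
        return sum(connected(S) for S in combinations(adj, k))

For the triangle {0: {1, 2}, 1: {0, 2}, 2: {0, 1}} and k = 2 this returns 3.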


12:15 BERNHARD VON STENGEL: FAST ALGORITHMS FOR RANK-1 BIMATRIX GAMES

The rank of a bimatrix game is the matrix rank of the sum of the two payoff
matrices. For a game of rank k, the set of its Nash equilibria is the
intersection of a one-dimensional set of equilibria of parameterized games of
rank k−1 with a hyperplane. For a game of rank 1, this is a monotonic path of
solutions to a parameterized zero-sum game and hence an LP. One intersection of
this path with the hyperplane can be found in polynomial time by binary search.
This gives a polynomial-time algorithm to find an equilibrium of a game of rank
1. On the other hand, there are rank-1 games with exponentially many equilibria,
which answers an open problem by Kannan and Theobald (Kannan and Theobald
(2010). Games of fixed rank: a hierarchy of bimatrix games. Economic Theory 42).
Hence, the exponential multiplicity of equilibria does not prevent finding one
equilibrium in polynomial time. In this spirit, we conjecture that deciding
uniqueness of an equilibrium of a rank-1 game is co-NP-hard. For games of higher
rank, the parameterized equilibria of lower rank correspond to the computation
of the global Newton method by Govindan and Wilson (Govindan and Wilson (2003).
A global Newton method to compute Nash equilibria. Journal of Economic Theory
110), which for two-player games is a special case of Lemke’s algorithm.

The talk is based on joint work with Bharat Adsul, Jugal Garg, Ruta Mehta, and
Milind Sohoni (Adsul et al. (2011). Rank-1 bimatrix games: a homeomorphism and a
polynomial time algorithm. Proc. 43rd Annual ACM Symposium on Theory of
Computing (STOC). Adsul et al. (2018). Fast algorithms for rank-1 bimatrix
games. arXiv:1812.04611).
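As a small illustration of the opening definition (a toy example of my own,
assuming numpy is available): the rank of a bimatrix game (A, B) is the matrix
rank of A + B, so a zero-sum game (B = −A) has rank 0, and the game below has
rank 1, placing it in the class covered by the polynomial-time result.

    import numpy as np

    A = np.array([[1.0, 3.0],
                  [2.0, 4.0]])
    # Choose B so that A + B is an outer product u v^T, hence has rank 1.
    B = np.outer([1.0, 2.0], [1.0, 1.0]) - A
    print(np.linalg.matrix_rank(A + B))   # prints 1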


13:00 LUNCH




BCTCS INVITED SPEAKER (E005)




14:30 MARIA CHUDNOVSKY: DETECTING ODD HOLES

A hole in a graph is an induced cycle of length at least four; and a hole is odd
if it has an odd number of vertices. In 2003 a polynomial-time algorithm was
found to test whether a graph or its complement contains an odd hole, thus
providing a polynomial-time algorithm to test if a graph is perfect. However,
the complexity of testing for odd holes (without accepting the complement
outcome) remained unknown. This question was made even more tantalizing by a
theorem of D. Bienstock that states that testing for the existence of an odd
hole through a given vertex is NP-complete. Recently, in joint work with Alex
Scott, Paul Seymour and Sophie Spirkl, we were able to design a polynomial-time
algorithm to test for odd holes.
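For contrast with the polynomial-time result, here is the obvious exponential
brute force (a sketch of my own, not the talk's algorithm). It uses the fact
that a k-vertex induced subgraph is a cycle exactly when it is connected and
every vertex has exactly two neighbours inside it:

    from itertools import combinations

    def has_odd_hole(vertices, adj):
        """Brute-force test for an induced odd cycle of length >= 5.
        adj: dict mapping each vertex to its set of neighbours."""
        for k in range(5, len(vertices) + 1, 2):      # odd sizes only
            for S in combinations(vertices, k):
                Sset = set(S)
                if any(len(adj[v] & Sset) != 2 for v in S):
                    continue                          # not 2-regular inside S
                seen, stack = {S[0]}, [S[0]]
                while stack:
                    v = stack.pop()
                    for u in (adj[v] & Sset) - seen:
                        seen.add(u)
                        stack.append(u)
                if len(seen) == k:                    # connected and 2-regular
                    return True
        return False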


15:30 COFFEE




CONTRIBUTED TALKS I, TRACK A: BOOLEAN NETWORKS (E240)




16:00 ALED WALTERS: SPECIFICATION BASED TESTING IN THE CONTEXT OF RAILWAY
SIGNALLING SYSTEMS

The railway domain is safety-critical: technical failure can lead to financial
loss or the loss of human life. Within railway systems, signalling is an
important safety measure. Its objectives include separation of trains to avoid
train collisions, setting speed restrictions to avoid train derailment, and the
coordination of train and point movement.

The European Rail Traffic Management System (ERTMS) is a modern railway control
system which is based on the communication between trains and interlocking via a
so-called radio block centre (RBC). In our PhD project, we are investigating how
to provide quality control of RBC implementations through specification based
testing. To this end, we formally model and verify the ERTMS. This yields a
specification which has been proven to be correct with regards to selected
safety properties. By systematically testing real-world implementations of an
RBC against our specification, we then establish that these implementations
conform to our specification. In particular, we aim to apply Gaudel’s testing
framework, which allows one to argue that testing is complete, i.e., that
testing is equivalent to performing a formal proof.

In this talk we will discuss a concrete, simple, first model of ERTMS in UPPAAL.
Furthermore, we will give an outlook for our overall project along with our aims
and methodologies moving forward. The research is done in collaboration with our
industry partner Siemens Rail Automation.


16:30 HANADI ALKHUDHAYR: BOOLEAN NETWORKS COMPOSITION: MULTIPLE ENTITIES AND
MULTIPLE MODELS

In order to study and analyse complex biological systems, a range of qualitative
modelling techniques have emerged such as Boolean Networks, Multi-Valued
Networks and Petri Nets. Boolean networks are a widely used qualitative
modelling approach which can represent the state of each biological component
(e.g. genes, proteins and other chemical signals) abstractly as a Boolean value,
where 1 represents that the entity is active (e.g. a gene is expressed or a
protein is present) and 0 that it is inactive (e.g. a gene is inhibited). A key challenge in
analysing Boolean networks is the well-known state space explosion problem which
constrains the applicability of the approach to large realistic systems.

We consider developing compositional techniques to address this limitation by
extending the basic compositional theory developed previously by the authors.
The compositional approach we present is based on composing Boolean networks by
merging entities using a binary Boolean operator (there are 16 such binary
Boolean operators to choose from), and we formalise behaviour preservation under
composition using a notion of compatibility. We consider what it means to
preserve the underlying behaviour of Boolean networks in a composed model when
the composition involves multiple entities and multiple component Boolean
networks.

It turns out that the compatibility property is problematic to check as it
references the full behaviour of the composed model, and so we consider
characterising the preservation of component Boolean network behaviour when the
composition involves multiple entities and multiple component Boolean networks.
We extend the definition of an interference graph to encompass the 16 binary
Boolean operators. We formalise this interference using a labelled state graph
approach and then use this to define an extended alignment property called weak
alignment. We show formally that weak alignment completely characterises
compatibility when the underlying Boolean operator being used is idempotent.
Therefore, the weak alignment property provides an important scalable means of
checking behaviour preservation in composed components without the need to
reference the composed model. The compositional framework presented here is
supported by a prototype tool that automates the composition process and
associated behaviour preservation analysis.
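For readers unfamiliar with the formalism, a minimal Python sketch of a
synchronous Boolean network and its full state graph, which is exactly the
exponential object (2^n states for n entities) that the compositional approach
aims to avoid building. The toy network and update functions are invented for
illustration:

    from itertools import product

    def state_graph(funcs):
        """Map every global state to its synchronous successor.
        funcs: dict entity -> update function on the current state dict."""
        entities = sorted(funcs)
        graph = {}
        for values in product([0, 1], repeat=len(entities)):
            state = dict(zip(entities, values))
            graph[values] = tuple(int(bool(funcs[e](state))) for e in entities)
        return graph

    # Toy network: x' = y, y' = x AND y.
    print(state_graph({"x": lambda s: s["y"], "y": lambda s: s["x"] and s["y"]}))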


17:00 HANIN ABDULRAHMAN: A FORMAL FRAMEWORK FOR THE DECOMPOSITION AND ANALYSIS
OF BOOLEAN NETWORKS

Qualitative modelling techniques have become a key tool in modelling and
analysing biological systems since they avoid the need for problematic
quantitative data about reaction rates. They are seen as an important approach
to enabling the development of methods and tools for analysing and synthesising
large biological systems. Boolean Networks (BNs) are a well-known qualitative
modelling approach that has been successfully used to model and analyse complex
biological networks such as gene regulatory networks and signal transduction
networks. However, modelling and analysing large-scale regulatory systems using
BNs is challenging due to the well-known state space explosion problem.

In this PhD research, we are investigating the development of a formal framework
for decomposing BNs to allow the analysis of large-scale models. The aim is to
develop a theoretical framework and practical techniques with tool support to
facilitate the automatic decomposition and analysis of large BNs. The key idea
is to decompose a BN into sub-models by splitting and duplicating entities. This
work builds on the existing compositional framework (developed at Newcastle
University) that allows entities to be composed using Boolean operators. This
presentation will cover our initial work in:
— Decomposing an interaction graph
— Decomposition quality metrics
— Boolean function decomposition


CONTRIBUTED TALKS I, TRACK B: PHYSICS (E005)




16:00 NICHOLAS CHANCELLOR: TRADING OFF OPTIMALITY AND ROBUSTNESS USING QUANTUM
ANNEALING

Quantum annealing is an analogue of simulated annealing in which fluctuations
are quantum as well as thermal; the best-known examples of quantum annealing
devices are those produced by D-Wave Systems Inc., which have roughly 2000
qubits. While traditionally the measure of success of a quantum annealing
protocol is the ability to find optimal solutions, I argue that one could also
consider the ability to find solutions which are both optimal and robust, in the
sense that a small change to a solution is likely to still lead to a good
solution. It is well known that quantum annealing naturally tends to
preferentially find solutions which are nearby (in the Hamming distance sense)
other solutions which are similarly optimal. While this is traditionally viewed
as a weakness, in that it can lead to suboptimal solutions, I demonstrate that
it can also be used as a strength.

I will show experimental results which demonstrate how advanced features of the
D-Wave device can be used to convert an optimal solution which is not very
robust to a more robust solution in a controllable way for both binary and mixed
binary/integer synthetic optimization problems. These results suggest a new use
of quantum annealers as a tool for trading off between optimality and robustness
in hybrid quantum/classical algorithms. I will also discuss the advanced
features used in this protocol, in particular the broader context of how the
reverse annealing feature may be used in hybrid quantum/classical algorithms and
give some experimental results in this direction.


16:30 DUNCAN ADAMSON: CRYSTAL STRUCTURE PREDICTION BY VERTEX REMOVAL IN
EUCLIDEAN SPACE

Crystal structure prediction remains a major problem in computational chemistry
and an essential barrier to the development of new materials. The problem is to
find the lowest-energy crystal structures, which chemists expect to be the
potentially “good” ones.

The approach to crystal structure prediction that we are focusing on is to take
a unit cell that has been densely packed with ions, then to remove ions from it.
Our goal in the approach is to minimise the total energy with the constraint
that the final charge is neutral, leaving us with a structure that may be close
to a realistic structure. The motivation for this approach from a computational
perspective is twofold: first, it is built from a simple base operation – the
removal of an ion – which we may reason about; secondly, we can combine this
approach with the opposite operation of inserting ions to reach any potential
structure, allowing us to reason about the optimal arrangement for the crystal.

In this talk we will discuss this problem through the lens of graph theory,
representing the structure as a set of vertices with a positive or negative
charge embedded into 3-dimensional Euclidean space forming a complete graph with
edges having weights corresponding to the energy between the ions in the
structure. We will present a formal definition of our problem based on this
input, as well as a class of functions which we may use to define the energy
between ions. Finally, we will conclude by showing that this problem is
NP-complete by a reduction from the Clique problem, both in the general case
and when we restrict it to the Buckingham-Coulomb potential function used in
chemistry.


17:00 TYSON JONES: VARIATIONAL ALGORITHMS FOR NOISY QUANTUM COMPUTERS

With large, error-corrected quantum computers still a distant dream, what fun
can be had with the smaller, error-prone, near-future machines? Even noisy
classical-quantum hybrids can be used to study quantum systems too large to be
simulated on a classical computer. Variational algorithms, whereby a classical
processor assists a quantum coprocessor to optimise a set of parameters, show
promising noise resilience. While challenging to scale, they enable us to
simulate quantum dynamics and even solve chemistry problems of intermediate
scale. I’ll introduce solving optimisation problems with variational algorithms,
discuss the edge they give us in simulating dynamics, and describe how we’ve
re-purposed them to study energy spectra and perform quantum circuit
recompilation.

The talk is based on joint work with Suguru Endo, Sam McArdle, Xiao Yuan, Ying
Li and Simon Benjamin.


17:30 BCTCS ORDINARY GENERAL MEETING (30 MINUTES)




19:00 CONFERENCE BANQUET




WEDNESDAY 17 APRIL


CONTRIBUTED TALKS II, TRACK A: MACHINE LEARNING AND GRAPH PROGRAMMING (E240)




09:00 ABDUL GHANI: THE SUM-OF-SQUARES HIERARCHY AND PROOF COMPLEXITY

The Sum-Of-Squares hierarchy – also known as the Lasserre or Positivstellensatz
hierarchy – is a method of generating semidefinite relaxations of polynomial
optimization problems. It has found applications in a wide range of diverse
fields, from robotics and control to the theory of constraint satisfaction.

In this talk we will view the Sum-Of-Squares hierarchy as a propositional proof
system – i.e., as a method of demonstrating the unsatisfiability (or the
tautologyhood) of propositional statements. In particular we will be concerned
with the complexity of refuting translations of unsatisfiable (in finite
domains) first-order combinatorial statements. This complexity is intimately
related to the degree of the relaxations.

This introductory talk will begin with relevant background and definitions. This
will be followed by some key results and applications of Sum-Of-Squares
throughout theoretical computer science. We will finish by discussing its
relationship to proof complexity, including some recent results and open
questions.
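For orientation, here is the standard shape of such a refutation (a textbook
definition stated for polynomial axioms over Boolean variables, not anything
specific to the talk): a degree-d Sum-Of-Squares refutation of axioms
p_1 = … = p_m = 0 (with the Boolean axioms x_i^2 − x_i = 0 included among the
p_i) is a polynomial identity

    -1 \;=\; \sigma_0 \;+\; \sum_{i=1}^{m} \lambda_i \, p_i,

where \sigma_0 is a sum of squares, the \lambda_i are arbitrary polynomials, and
every term involved has degree at most d. This degree d is the measure whose
growth on translated first-order statements the talk is concerned with.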

This talk is based on joint work with Stefan Dantchev and Barnaby Martin.


09:30 BRIAN COURTEHOUTE: RULE-BASED GRAPH PROGRAMMING

GP 2 is a computationally complete, non-deterministic graph programming language
based on rules. In a given host graph, one looks for a match of a specific graph
and replaces that match with another graph. The existence and uniqueness of these
constructions is based on double-pushouts from category theory. The language has
relatively simple syntax and semantics, which helps in making complexity or
verification arguments, and comes with a compiler that generates C code.

In general, there is a significant cost to matching a graph in the host graph.
To lessen that cost, the language has been expanded to rooted graphs. Some nodes
are distinguished as root nodes that can be accessed in constant time. They can
be thought of as the nodes that are marked as the “current node” in
illustrations of graph algorithms.

In this talk, we discuss some examples of rooted GP 2 programs. In particular,
we will focus on identifying connected components, topological sorting and
2-colouring.

This is joint work with Detlef Plump.


10:00 ARVED FRIEDEMANN: VERIFIABLE MODELS TO MACHINE LEARNING TASKS USING SAT

Machine learning (ML) provides state-of-the-art performance in many tasks where
the rules and representations required for classification cannot be described
manually. However, due to the complexity of describing all potential behaviour
for an input distribution, their use in safety-critical systems is limited. For
ML to be applicable to such systems, properties such as robustness against noise
and reachability must be verified. On the other hand, logical formulas,
like circuit representations, are verifiable and can be inferred using synthesis
techniques. We propose the use of a Boolean Satisfiability (SAT) solver that
uses unit propagation and clause learning to create a logical model that fits
given observations and verifies safety criteria. In doing so, we show
interesting comparisons between concepts implemented in SAT and ML that allow
for further research. Our implementation is tested on the classic MNIST dataset,
where evaluations of training time, accuracy, and verifiability are made.

This is joint work with Jay Morgan.
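Since unit propagation is the engine mentioned above, here is a minimal
self-contained sketch of it (my own illustration; clause learning omitted),
with DIMACS-style literals:

    def unit_propagate(clauses, assignment):
        """Repeatedly assign the variable of any unit clause.
        clauses: iterable of lists of nonzero ints (-3 means NOT x3).
        assignment: dict variable -> bool. Returns the extended
        assignment, or None if some clause is falsified."""
        assignment = dict(assignment)
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                unassigned, satisfied = [], False
                for lit in clause:
                    var, want = abs(lit), lit > 0
                    if var in assignment:
                        satisfied = satisfied or assignment[var] == want
                    else:
                        unassigned.append(lit)
                if satisfied:
                    continue
                if not unassigned:
                    return None               # conflict: clause falsified
                if len(unassigned) == 1:      # unit clause forces a value
                    lit = unassigned[0]
                    assignment[abs(lit)] = lit > 0
                    changed = True
        return assignment

For example, unit_propagate([[1], [-1, 2]], {}) yields {1: True, 2: True}.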


10:30 ELENA ZAMARAEVA: ON THE NUMBER OF 2-THRESHOLD FUNCTIONS

We consider 2-threshold functions over a 2-dimensional integer grid of a fixed
size M×N, that is, the functions which can be represented as the conjunction of
two threshold functions. The number of threshold functions is known to be
asymptotically 6M²N²/π² + O(MN²) (P. Haukkanen and J. K. Merikoski. Asymptotics
of the number of threshold functions on a two-dimensional rectangular grid.
Discrete Applied Mathematics, 161, 2013: 13-18). This immediately gives an upper
bound of 18M⁴N⁴/π⁴ + O(M³N⁴) on the number of 2-threshold functions. In this
work we provide an asymptotic formula for the number of 2-threshold functions.
To achieve this goal we establish a one-to-one correspondence between almost all
2-threshold functions and pairs of integer segments with specific properties,
which we call proper pairs of segments. We expect this bijection to be useful in
algorithmic studies of 2-threshold functions.


CONTRIBUTED TALKS II, TRACK B: GRAPH THEORY AND ALGORITHMS I (E005)




09:00 FRANCES COOPER: RANK-MAXIMAL STABLE MATCHINGS IN THE STABLE MARRIAGE
PROBLEM

In 1962, Gale and Shapley described the Stable Marriage problem (SM) in their
seminal paper “College Admissions and the Stability of Marriage”. In this
setting we have two sets of agents: men and women, where each man has a
strictly-ordered preference list over a subset of women and vice versa. We seek
a stable matching, that is, an assignment of men to women such that no man-woman
pair (who are not currently assigned to each other) would prefer to be assigned
to each other over their current partners (if any).

Each instance of SM has at least one stable matching, and may indeed have many.
Although all stable matchings for an instance have the same size, they may
differ greatly in terms of the happiness of individual agents. This motivates us
to find a stable matching with additional properties. One such example is a
rank-maximal stable matching, which is a stable matching in which the greatest
number of men and women gain their first-choice partner and, subject to that,
their second choice, and so on. A polynomial-time algorithm for computing such a
matching was described by Gusfield and Irving in 1989. However, this algorithm
relies upon exponential weights which, when used in practice, may result in
overflows or memory issues. In this talk I describe a new polynomial-time
algorithm to compute a rank-maximal stable matching using a combinatorial
approach, without the need to resort to exponential weights.

The talk is based on joint work with David Manlove.
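For context, a compact Python rendering of the classical Gale-Shapley proposal
algorithm for complete, strictly ordered lists (the baseline the talk builds on;
the new rank-maximal algorithm itself is not reproduced here):

    from collections import deque

    def gale_shapley(men_prefs, women_prefs):
        """Return the man-optimal stable matching as a dict man -> woman.
        Both inputs map each agent to a complete list, most preferred first."""
        rank = {w: {m: i for i, m in enumerate(ps)}
                for w, ps in women_prefs.items()}
        next_idx = {m: 0 for m in men_prefs}     # next woman to propose to
        fiance = {}                              # woman -> current partner
        free = deque(men_prefs)
        while free:
            m = free.popleft()
            w = men_prefs[m][next_idx[m]]
            next_idx[m] += 1
            if w not in fiance:
                fiance[w] = m
            elif rank[w][m] < rank[w][fiance[w]]:  # w prefers m; jilt partner
                free.append(fiance[w])
                fiance[w] = m
            else:
                free.append(m)                     # w rejects m
        return {m: w for w, m in fiance.items()}

On complete strict lists this terminates after at most n² proposals and always
returns a stable matching.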


09:30 SOFIAT OLAOSEBIKAN: STRONGLY STABLE MATCHINGS IN THE STUDENT PROJECT
ALLOCATION PROBLEM WITH TIES

The Student-Project Allocation problem with lecturer preferences over Students
(SPA-S) involves assigning students to projects based on student preferences
over projects, lecturer preferences over students, and the maximum number of
students that each project and lecturer can accommodate. This classical model
assumes that preference lists are strictly ordered. In this work, we study a
variant of SPA-S where ties are allowed in the preference lists of students and
lecturers, which we refer to as the Student-Project Allocation problem with
lecturer preferences over Students with Ties (SPA-ST). In this context, what we
seek is a stable matching of students to projects. However, when the preference
lists involve ties, three different stability concepts naturally arise; namely,
weak stability, strong stability and super-stability.

In this talk, I will focus on the concept of strong stability in SPA-ST.
Further, I will describe a polynomial-time algorithm to find a strongly stable
matching or to report that no such matching exists, given an instance of SPA-ST.
Our algorithm runs in O(m²) time, where m is the total length of the students’
preference lists.

The talk is based on joint work with David Manlove.


10:00 MICHELE ZITO: BOOLEAN NETWORK TOMOGRAPHY FOR NODE FAILURE IDENTIFICATION
IN LINE OF SIGHT NETWORKS

A central issue in the study of communication networks is that of constantly
ensuring that the structure works reliably. Network Tomography is a family of
distributed failure detection algorithms based on the spreading of end-to-end
measurements [1,7] along the network edges. In particular, Boolean Network
Tomography (BNT for short) uses Boolean measurements. Introduced in [2,5], BNT
has recently attracted a lot of interest [4,6], given the importance, for
example, of recognizing failing nodes or links in a network, and given the
simple measurements (1 = failing, 0 = working) run through the network.

In this work we use BNT to identify corrupted/failing nodes. We study the
identifiability of simultaneous node failures in Line-of-Sight (LoS for short)
networks. LoS networks were introduced by Frieze et al. in [3] and have been
widely studied ever since as they can be used to model communication patterns in
a geometric environment containing obstacles. Like grids, LoS networks can be
embedded in a finite cube of Z^d, for some positive integer d. But LoS networks
generalize grids in that edges are allowed between nodes that are not
necessarily next to each other in the network embedding. The main contribution
of our work is the analysis of a strategy to identify large sets of
simultaneously failing nodes in LoS networks. Using the network
vertex-connectivity κ(G) (i.e. the size of a minimum set of nodes whose removal
disconnects the graph), we are able to prove that the maximum number of
simultaneously failing nodes that can be detected in any LoS network G using BNT
is at least κ(G)−1.

The talk is based on joint work with Nicola Galesi and Fariba Ranjbar.

 1. M. Coates, A. O. Hero, R. Nowak, and B. Yu. Internet tomography. IEEE Signal
    Processing Magazine, 19:47–65, 2002.
 2. N. G. Duffield. Network tomography of binary network performance
    characteristics. IEEE Transactions on Information Theory, 52(12):5373–5388,
    2006.
 3. A. M. Frieze, J. M. Kleinberg, R. Ravi, and W. Debany. Line-of-sight
    networks. In Proc. 18th SODA, pp 968–977, ACM, 2007.
 4. N. Galesi and F. Ranjbar. Tight bounds for maximal identifiability of
    failure nodes in boolean network tomography. In Proc. 38th ICDCS, pp.
    212–222, IEEE, 2018.
 5. D. Ghita, C. Karakus, K. Argyraki, and P. Thiran. Shifting network
    tomography toward a practical goal. In Proc. of CoNEXT ’11, pp 24:1–24:12,
    ACM, 2011.
 6. L. Ma, T. He, A. Swami, D. Towsley, and K. K. Leung. Network capability in
    localizing node failures via end-to-end path measurements. IEEE/ACM
    Transactions on Networking, 25(1):434–450, 2017.
 7. Y. Vardi. Network tomography: Estimating source-destination traffic
    intensities from link data. Journal of the American Statistical Association,
    91(433):365–377, 1996.


10:30 RAJESH CHITNIS: TOWARDS A THEORY OF PARAMETERIZED STREAMING ALGORITHMS

Parameterized complexity attempts to give a more fine-grained analysis of the
complexity of problems: instead of measuring the running time as a function of
only the input size, we analyze the running time with respect to additional
parameters. This approach has proven to be highly successful in delineating our
understanding of NP-hard problems. Given this success with the TIME resource, it
seems but natural to use this approach for dealing with the SPACE resource.
First attempts in this direction have considered a few individual problems, with
some success: Fafianie and Kratsch [MFCS’14] and Chitnis et al. [SODA’15]
introduced the notions of streaming kernels and parameterized streaming
algorithms respectively. For example, the latter shows how to refine the Ω(n²)
bit lower bound for finding a minimum Vertex Cover (VC) in the streaming setting
by designing an algorithm for the parameterized k-VC problem which uses
O(k² log n) bits.

In this work, we initiate a systematic study of graph problems from the paradigm
of parameterized streaming algorithms. We first define a natural hierarchy of
space complexity classes – FPS, SubPS, SemiPS, SupPS and BrutePS – and then
obtain tight classifications into this hierarchy for several well-studied graph
problems such as Longest Path, Feedback Vertex Set, Dominating Set, Girth,
Clique and Treewidth. On the algorithmic side, our parameterized streaming
algorithms use techniques from the FPT world such as bidimensionality, iterative
compression and bounded-depth search trees. On the hardness side, we obtain
lower bounds for the parameterized streaming complexity of various problems via
novel reductions from problems in communication complexity. We also show how
(conditional) lower bounds for kernels and W-hard problems translate to lower
bounds for parameterized streaming algorithms.
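To give the flavour of the algorithmic side, here is a one-pass fragment in the
spirit of the maximal-matching trick for k-VC (a simplification of my own, not
the full O(k² log n)-bit algorithm): any vertex cover must contain an endpoint
of every matching edge, so a greedily maintained maximal matching of size
exceeding k certifies a NO-instance while storing only O(k log n) bits.

    def vc_refutation_pass(edge_stream, k):
        """One pass over the edge stream maintaining a greedy maximal
        matching. Returns False once the matching exceeds k edges
        (so no vertex cover of size <= k exists); True is inconclusive
        on its own and would be followed by an offline kernel step."""
        matched, size = set(), 0
        for u, v in edge_stream:
            if u not in matched and v not in matched:
                matched.update((u, v))
                size += 1
                if size > k:
                    return False
        return True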

Parameterized algorithms and streaming algorithms are approaches to cope with
TIME and SPACE intractability respectively. It is our hope that this work on
parameterized streaming algorithms leads to a two-way flow of ideas between
these two previously separated areas of theoretical computer science.

The talk is based on joint work with Graham Cormode.


11:00 COFFEE




BCTCS INVITED SPEAKER (E005)




11:30 ULRICH BERGER: IFP – A LOGIC FOR PROGRAM EXTRACTION

That proofs correspond to computation is an old idea of logic going back to
Kolmogorov, Brouwer, Boole and many others. In computer science this is known as
the Curry-Howard correspondence, which identifies the typed lambda-calculus both
as a notation system for proofs and as a programming language.

A related – and regarding specification more expressive – correspondence between
proofs and programs is provided by Kleene’s realizability which can be viewed as
a computationally enriched semantics of logic and arithmetic. Kleene proved the
fundamental Soundness Theorem which shows that from a constructive proof of a
formula one can extract an algorithm that realizes the proven formula, that is,
solves the computational problem it expresses.

Realizability offers an attractive alternative to program verification: Instead
of first writing a program and then trying to prove that it meets its
specification, one first proves that the specification is satisfiable and then
extracts a program (provably) satisfying it.

The advantages of program extraction over verification are obvious: While it is
in general undecidable whether or not a program meets its specification, the
correctness of a proof can be easily checked. Hence program extraction not only
yields correct programs but also provides formal correctness proofs for free.

Despite the virtues of program extraction, the currently dominating technologies
for producing formally certified software are based on traditional
‘programming-first’ methodologies such as model-checking and formal
verification. The main reasons are that program extraction is considered too
restricted, since it covers only functional programming but misses out on many
other computational paradigms, and too complicated and demanding for software
developers to be of practical relevance. This talk will present a logical system
IFP (Intuitionistic Fixed point Logic) as a basis for program extraction that
addresses these issues.

After a brief introduction to IFP, an overview of the main case studies – most of
them implemented in the Minlog proof system – is given. This is followed by an
outline of new developments aiming to extend program extraction to capture
non-functional computational paradigms such as concurrency and state based
computation.

The talk is based on joint work with Tie Hou, Alison Jones, Andrew Lawrence,
Kenji Miyamoto, Fredrik Nordvall-Forsberg, Olga Petrovska, Helmut
Schwichtenberg, Monika Seisenberger, Hideki Tsuiki, and others.


12:30 LUNCH




CONTRIBUTED TALKS III, TRACK A: ONLINE ALGORITHMS AND RAILWAY APPLICATIONS
(E240)




14:00 CHRISTIAN COESTER: THE ONLINE K-TAXI PROBLEM

In the k-taxi problem, a generalization of the k-server problem, an algorithm
controls k taxis to serve a sequence of requests in a metric space. A request
consists of two points s and t, representing a passenger that wants to be
carried by a taxi from s to t. For each request and without knowledge of future
requests, the algorithm has to select a taxi to transport the passenger. The
goal is to minimize the total distance traveled by all taxis. The problem comes
in two flavors, called the easy and the hard k-taxi problem: In the easy k-taxi
problem, the cost is defined as the total distance traveled by the taxis; in the
hard k-taxi problem, the cost is only the distance of empty runs.

The easy k-taxi problem is exactly equivalent to the k-server problem. The talk
will focus mostly on the hard version, which is substantially more difficult.
For hierarchically separated trees, I will present a memoryless randomized
algorithm with optimal competitive ratio 2^k − 1. This implies an
O(2^k log n)-competitive algorithm for arbitrary n-point metrics, which is the
first competitive algorithm for the hard k-taxi problem for general finite
metric spaces and general k. I will also describe the main ideas of an algorithm
based on growing, shrinking and shifting regions which achieves a constant
competitive ratio for three taxis on the line (abstracting the scheduling of
three elevators).

The talk is based on joint work with Elias Koutsoupias (Coester and Koutsoupias.
The online k-taxi problem. In Symposium on Theory of Computing, STOC 2019. To
appear.)
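To illustrate the two cost models (a toy sketch of my own on the line metric,
using a naive closest-taxi rule that is not competitive in general):

    def serve(requests, taxis):
        """Serve each request (s, t) with the closest taxi on the line.
        Returns (easy_cost, hard_cost): the easy model counts all
        movement, the hard model only the empty run to the pickup."""
        easy = hard = 0.0
        for s, t in requests:
            i = min(range(len(taxis)), key=lambda j: abs(taxis[j] - s))
            empty = abs(taxis[i] - s)
            hard += empty
            easy += empty + abs(t - s)
            taxis[i] = t                 # taxi ends at the drop-off point
        return easy, hard

    print(serve([(3, 0), (4, 9)], [0.0, 5.0]))   # (14.0, 6.0)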


14:30 PAVEL VESELÝ: A Φ-COMPETITIVE ALGORITHM FOR SCHEDULING PACKETS WITH
DEADLINES

In the online packet scheduling problem with deadlines, the goal is to schedule
transmissions of packets that arrive over time in a network switch and need to
be sent across a link. Each packet has a deadline, representing its urgency, and
a non-negative weight, representing its priority. Only one packet can be
transmitted in any time slot, so, if the system is overloaded, some packets will
inevitably miss their deadlines and be dropped. In this scenario, the natural
objective is to compute a transmission schedule that maximizes the total weight
of packets which are successfully transmitted. The problem is inherently online,
with the scheduling decisions made without the knowledge of future packet
arrivals. The goal is to develop an algorithm with the lowest possible
competitive ratio, which is the worst-case ratio between the optimum total
weight of a schedule (computed by an offline algorithm) and the weight of a
schedule computed by a (deterministic) online algorithm.

In this talk, we present a ϕ-competitive online algorithm (where ϕ≈1.618 is the
golden ratio), matching the previously established lower bound. We then outline
the main ideas of its analysis and discuss possible extensions of this work.

The talk is based on joint work with Marek Chrobak, Lukasz Jez, and Jiri Sgall.
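For reference, here is the classical greedy baseline for this model (transmit
the heaviest pending packet that is still alive), generally cited as
2-competitive; the ϕ-competitive algorithm of the talk is considerably more
subtle. A sketch of mine under the integer-time-slot model described above:

    import heapq

    def greedy_schedule(packets):
        """Each unit slot, transmit the heaviest pending packet whose
        deadline has not passed. packets: list of (arrival, deadline,
        weight) with integer slot times. Returns total weight sent."""
        packets = sorted(packets)        # by arrival time
        pending = []                     # max-heap on weight via negation
        total, i, t = 0, 0, 0
        while i < len(packets) or pending:
            if not pending and packets[i][0] > t:
                t = packets[i][0]        # idle until the next arrival
            while i < len(packets) and packets[i][0] <= t:
                arrival, deadline, weight = packets[i]
                heapq.heappush(pending, (-weight, deadline))
                i += 1
            while pending and pending[0][1] < t:
                heapq.heappop(pending)   # heaviest packet already expired
            if pending:
                w, d = heapq.heappop(pending)
                total -= w               # w is the negated weight
            t += 1
        return total

    print(greedy_schedule([(0, 0, 1), (0, 1, 2)]))  # 2; the optimum gets 3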


15:00 YONG ZHANG: HYBRID ONLINE SAFETY OBSERVER FOR TRAIN CONTROL SYSTEM
ON-BOARD EQUIPMENT

The on-board equipment of train control plays a key role in protecting trains
from over-speeding. The demand for fully automatic operation places higher
requirements on the precise control of railway signalling systems. Conventional
over-speed protection methods monitor the speed at discrete time instants.
However, the over-speed behavior between discrete time instants cannot be
detected, which may cause potential risks.

To address this problem, we propose a hybrid safety observation method to
monitor train speed throughout the operation. In the proposed method, train
behavior is modeled with a hybrid automaton, which takes parameters that affect
train movement into consideration. The parameters can be updated along with the
movement of the train. Consequently, the train behavior in the next control
cycles can be captured continuously by computing the reachable set of the model.
The safety properties are also classified as a set of states. This set is
obtained according to the dynamic and static speed limits regarding various
over-speed protection principles. The intersection check between the reachable
set and safety property set is performed based on the translated space
representations. Finally, we make a judgement on whether the train speed could
exceed the speed limit in a short future period.

In order to illustrate our proposed method, we present a case study based on the
real engineering data from the Chinese railway industry. A plant train model is
constructed according to the specific train dynamics. A simplified over-speeding
control logic is adopted in the case of the predefined scenario. The results
show that the observation method is feasible for verifying violations of
CTCS-3 on-board equipment safety properties.
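As a crude caricature of the reachable-set idea (a one-dimensional sketch of my
own, not the CTCS model): with bounded acceleration, the speeds reachable within
one control cycle form an interval, and the safety check reduces to intersecting
that interval with the speed limit, catching violations between sampling
instants.

    def over_speed_risk(v0, a_max, cycle, v_limit):
        """Worst-case speed reachable within one control cycle, assuming
        acceleration bounded above by a_max: flag a risk whenever the
        upper end of the reachable interval crosses the limit."""
        return v0 + a_max * cycle > v_limit

    # Speed 79, max acceleration 0.8 over a 2 s cycle: reachable up to 80.6.
    print(over_speed_risk(79.0, 0.8, 2.0, 80.0))   # True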


15:30 FILIPPOS PANTEKIS: GRAPH LAYOUT ALGORITHMS FOR RAILWAYS




CONTRIBUTED TALKS III, TRACK B: GRAPH THEORY AND ALGORITHMS II (E005)




14:00 ELEKTRA KYPRIDEMOU: CORRELATION OF METRICS IN RANDOM GRAPHS

In this work, we are interested in the study of the pair-wise correlations
between several metrics of random graphs of small order, which are embedded in a
two-dimensional Euclidean plane. We present a preliminary investigation of 11
measures in two well-known models of random graphs: the Erdős–Rényi graphs
(which we assume randomly embedded in a circle C(O,R) of centre O(0,0) and
radius R) and random geometric graphs (RGG) over the same C(O,R). Relevant
descriptive measures include the size of the smallest dominating sets, the
clique and independence numbers, the number of connected components, and
measures of the graph clustering properties.
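For concreteness, a minimal sampler for the second model (random geometric
graphs in a disc; a sketch of my own, using math.dist from Python 3.8+):

    import math
    import random

    def random_geometric_graph(n, d, R=1.0):
        """Sample n points uniformly in the disc C(O, R) by rejection,
        joining two points when their Euclidean distance is at most d."""
        pts = []
        while len(pts) < n:
            x, y = random.uniform(-R, R), random.uniform(-R, R)
            if x * x + y * y <= R * R:
                pts.append((x, y))
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if math.dist(pts[i], pts[j]) <= d]
        return pts, edges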

Studying such correlations analytically is very difficult. We review the
existing literature on correlation analysis inspired by real-world complex
networks, such as the Internet [2] and social networks [3]. This work, however,
is motivated by applications in Psychology, where one is interested in the
perceived numerosity of configurations of dots spread in a two-dimensional
surface [1]. In applications like these, correlations could also be useful to
answer questions about one measure based on the properties of another, related
one.

After analysing the pair-wise Spearman’s rank correlation coefficients of the
measures mentioned above, we find that strong relationships exist between groups
of measures, both in the Erdős–Rényi random graphs and in the RGGs. Not
surprisingly, however, these correlations depend strongly on the main graph
parameters of the two models (the p value in G(n,p) and the d value in the RGG
model).

The talk is based on joint work with Marco Bertamini, Martin Guest, and Michele
Zito.

 1. M. Bertamini, M. Zito, N. E Scott-Samuel and J. Hulleman. Spatial clustering
    and its effect on perceived clustering, numerosity, and dispersion.
    Attention, Perception, and Psychophysics, 78(5):1460–1471, 2016.
 2. A. Garcia-Robledo, A. Diaz-Perez and G. Morales-Luna. Correlation analysis
    of complex network metrics on the topology of the internet. In 2013 10th
    International Conference and Expo on Emerging Technologies for a Smarter
    World (CEWIT), 1–6. IEEE, 2013.
 3. J. D. Guzman, R. F. Deckro, M. J. Robbins, J. F. Morris and N. A. Ballester.
    An analytical comparison of social network measures. IEEE Transactions on
    Computational Social Systems, 1(1):35–45, 2014.


14:30 MATTHEW ENGLAND: CYLINDRICAL ALGEBRAIC DECOMPOSITION WITH EQUATIONAL
CONSTRAINTS

Cylindrical Algebraic Decomposition (CAD) has long been one of the most
important algorithms within Symbolic Computation, as a tool to perform
quantifier elimination in first order logic over the reals. Its worst case
complexity bound is doubly exponential in the number of variables which limits
its use in practice, however, it has found application throughout science and
engineering, with published applications as diverse as biological network
analysis and economic hypothesis refutation. In particular, it is finding
prominence in the Satisfiability Checking community as a subtool to check the
satisfiability of formulas in nonlinear real arithmetic within larger SMT-solver
problem instances.

The original CAD algorithm produces decompositions according to the signs of
polynomials, when what is usually required is a decomposition according to the
truth of a formula containing those polynomials. One approach to achieve that
coarser (but hopefully cheaper) decomposition is to reduce the polynomials
identified in the CAD to reflect a logical structure which reduces the solution
space dimension: the presence of Equational Constraints (ECs). This talk will
summarise recent progress on the theory of CAD with ECs. In particular, we
analyse the effect of ECs on the complexity of CAD and demonstrate that the
double exponent in the worst case complexity bound reduces in line with the
number of ECs.

The talk is based on joint work with James Davenport and Russell Bradford
(University of Bath). It is based on the published conference papers [1,2] and
the upcoming journal paper [3].

 1. M. England, R. Bradford and J.H. Davenport. Improving the use of equational
    constraints in cylindrical algebraic decompositions. Proceedings of the 40th
    International Symposium on Symbolic and Algebraic Computation (ISSAC ’15),
    165–172. ACM, 2015.
 2. M. England and J.H. Davenport. The complexity of cylindrical algebraic
    decomposition with respect to polynomial degree. In: V.P. Gerdt, W. Koepf,
    W.M. Seiler and E.V. Vorozhtsov, eds. Computer Algebra in Scientific
    Computing (Proc. CASC ’16), 172-192. (Lecture Notes in Computer Science,
    9890). Springer International, 2016.
 3. M. England, R. Bradford and J.H. Davenport. Cylindrical Algebraic
    Decomposition with Equational Constraints. To appear in the Journal of
    Symbolic Computation, 2019.


15:00 GIACOMO PAESANI: ON THE PRICE OF INDEPENDENCE FOR GRAPH TRANSVERSALS

We compare the minimum value of the classical graph transversals – vertex cover,
feedback vertex set and odd cycle transversal – with the minimum value of the
corresponding variants, in which we require the transversal to be an independent
set. That is, let vc(G), fvs(G) and oct(G) denote, respectively, the size of a
minimum vertex cover, minimum feedback vertex set and minimum odd cycle
transversal in a graph G. One can then ask, when looking for these sets in a
graph, how much bigger might they be if we require that they are independent;
that is, what is the price of independence? If G has a vertex cover, feedback
vertex set or odd cycle transversal that is an independent set, then we let,
respectively, ivc(G), ifvs(G) or ioct(G) denote the minimum size of such a set.

First, we investigate for which graphs H the values of ivc(G), ifvs(G) and
ioct(G) are bounded in terms of vc(G), fvs(G) and oct(G), respectively, when the
graph G belongs to the class of H-free graphs. We find complete classifications
for vertex cover and feedback vertex set and an almost complete classification
for odd cycle transversal (subject to three open cases).

Second, we investigate for which graphs H the values of ivc(G), ifvs(G) and
ioct(G) are equal to vc(G), fvs(G) and oct(G), respectively, when the graph G
belongs to the class of H-free graphs. We find complete classifications for
vertex cover and almost complete classifications for feedback vertex set
(subject to one open case) and odd cycle transversal (subject to three open
cases).

The talk is based on joint work with Konrad K. Dabrowski, Matthew Johnson,
Daniel Paulusma and Viktor Zamaraev.
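To make the quantities concrete, a brute-force evaluator for vc(G) and ivc(G)
on small graphs (exponential, purely illustrative; note that an independent
vertex cover exists exactly when G is bipartite, so ivc(G) can be undefined):

    from itertools import combinations

    def min_vc(n, edges, independent=False):
        """Smallest vertex cover of a graph on vertices 0..n-1, optionally
        required to be an independent set; returns None if none exists."""
        for k in range(n + 1):
            for S in combinations(range(n), k):
                S = set(S)
                covers = all(u in S or v in S for u, v in edges)
                indep = all(not (u in S and v in S) for u, v in edges)
                if covers and (indep or not independent):
                    return k
        return None

    triangle = [(0, 1), (1, 2), (0, 2)]
    print(min_vc(3, triangle), min_vc(3, triangle, independent=True))  # 2 None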


15:30 BENJAMIN MERLIN BUMPUS: THE WIDTH OF MINIMUM COST TREE DECOMPOSITIONS

Tree decompositions have been very successfully employed in the design of
parameterized graph algorithms. Typically the running time of such algorithms
depends on the width of the decomposition provided: i.e the size of its largest
bag. For this reason much effort has been directed towards finding tree
decompositions with minimum width. However, this is not always the right
approach: the width of a tree decomposition which minimizes the running time of
some algorithm is not always minimum.

This talk will address progress related to the question: “is the width of
an ‘algorithmically best’ tree decomposition bounded with respect to treewidth?”
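A toy cost model makes the point (an illustration of my own, assuming a dynamic
programme whose work is exponential in each bag size): width only reflects the
largest bag, whereas the running time depends on all of them, so equal-width
decompositions can differ greatly in cost.

    def dp_cost(bags, base=2):
        """Total work of a schematic dynamic programme that spends
        base ** |bag| time on each bag of a tree decomposition."""
        return sum(base ** len(bag) for bag in bags)

    # Both bag lists have width 2 (largest bag size 3), but differ in cost:
    many_big = [{0, 1, 2}] * 10 + [{9, 10}]
    few_big = [{0, 1, 2}] + [{9, 10}] * 10
    print(dp_cost(many_big), dp_cost(few_big))   # 84 48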


IMPORTANT DATES

 * Abstract Submission: 1 March 2019
 * Notification: 8 March 2019
 * Registration: 15 March 2019

Scam Alert: We are aware that invited speakers and delegates may be contacted by
individuals claiming to be affiliated with BCTCS and offering assistance with
accommodation bookings. Please be aware that this is a scam operated by a
company calling themselves Business Travel Management. You will never receive
any unsolicited contact from the organizers promoting accommodation.


SPONSORS




DURHAM

The conference is hosted by Algorithms and Complexity in Durham (ACiD), a
research group in the Department of Computer Science.

Organizing Committee

 * Matthew Johnson (Durham)
 * Barnaby Martin (Durham)
 * George Mertzios (Durham)
 * Daniel Paulusma (Durham)

Travel


Durham is easy to reach by train with frequent services to London (about 3
hours) and Edinburgh (about 2 hours). Newcastle Airport also has good
connections to Durham.

Here is further information on travelling to Durham. Here is an
annotated map for finding your way in the city: expand the lists in the margin
and click on Department of Computer Science to find the building where the
meeting takes place. Here is a better map to print: the Department of Computer
Science is building 14 and Collingwood College is 6.

Residential delegates can go first to Collingwood College if they wish to leave
luggage there (but rooms might not be available before 2pm). Note that the
college reception is open from 8am to 5pm on weekdays. At other times a duty
porter is available. If he is not at reception, there is a phone and
instructions to contact him.

Residential delegates who will arrive by car should go first to Collingwood
College and register their car with the college reception before using the
college car park (spaces are subject to availability). The university does not
provide parking for non-residential delegates. The Durham City Park and Ride at
Howlands Farm is a 15 minute walk (or a short bus journey) from the Department.

PREVIOUS CONFERENCES

 * 1985 – BCTCS 1 – University of Leeds, 1-3 April 1985
 * 1986 – BCTCS 2 – University of Warwick, 24-26 March 1986
 * 1987 – BCTCS 3 – University of Leicester, 13-15 April 1987
 * 1988 – BCTCS 4 – University of Edinburgh, 28-31 March 1988
 * 1989 – BCTCS 5 – Royal Holloway and Bedford New College, 11-13 April 1989
 * 1990 – BCTCS 6 – University of Manchester, 28-30 March 1990
 * 1991 – BCTCS 7 – University of Liverpool, 26-28 March 1991
 * 1992 – BCTCS 8 – University of Newcastle, 24-26 March 1992
 * 1993 – BCTCS 9 – University of York, 28-31 March 1993
 * 1994 – BCTCS 10 – University of Bristol, 28-30 March 1994
 * 1995 – BCTCS 11 – University of Wales Swansea, 2-5 April 1995
 * 1996 – BCTCS 12 – University of Kent at Canterbury, 1-4 April 1996
 * 1997 – BCTCS 13 – University of Sheffield, 23-26 March 1997
 * 1998 – BCTCS 14 – University of St. Andrews, 31 March – 2 April 1998
 * 1999 – BCTCS 15 – Keele University 14-16th April 1999
 * 2000 – BCTCS 16 – University of Liverpool 10-12th April 2000
 * 2001 – BCTCS 17 – University of Glasgow, 9-12th April 2001
 * 2002 – BCTCS 18 – HP Laboratories Bristol, 7-10 April 2002
 * 2003 – BCTCS 19 – University of Leicester, 7-9 April 2003
 * 2004 – BCTCS 20 – University of Stirling, 5-8 April 2004
 * 2005 – BCTCS 21 – University of Nottingham, 21-24 March 2005
 * 2006 – BCTCS 22 – Swansea University, 4-7 April 2006
 * 2007 – BCTCS 23 – Oxford University, 2-5 April 2007
 * 2008 – BCTCS 24 – Durham University, 7-10 April 2008
 * 2009 – BCTCS 25 – Warwick University, 6-9 April 2009
 * 2010 – BCTCS 26 – Edinburgh University, 6-9 April 2010
 * 2011 – BCTCS 27 – Birmingham University, 18-21 April 2011
 * 2012 – BCTCS 28 – Manchester University, 2-5 April 2012
 * 2013 – BCTCS 29 – University of Bath, 24-27 March 2013
 * 2014 – BCTCS 30 – Loughborough University, 9-11 April 2014
 * 2015 – BCTCS 31 – Middlesex University, 14-18 September 2015
 * 2016 – BCTCS 32 – Queen’s University Belfast, 22-24 March 2016
 * 2017 – BCTCS 33 – University of St Andrews, 26-28 April 2017
 * 2018 – BCTCS 34 – Royal Holloway University of London, 26-28 March 2018
