Digilogue Technologies
Innovative Software Engineering Solutions & Enterprise Consulting
Founded and led by seasoned software consultant Gary Black, Digilogue
Technologies Ltd., based in Toronto, Ontario, Canada, specializes in
microservices, distributed systems, AI-driven architecture, and
cloud-native development.
With over 20 years of experience in engineering, high-performance
computing and enterprise architecture, Gary delivers solutions that help
businesses scale, optimize, and innovate — each crafted with precision
and deep technical insight.
Software Engineering Technologies and Skillset
This section provides a deeper look into Digilogue’s core areas
of expertise—offering clients a clearer understanding of the
capabilities and services available. It can be thought of as an
extended resume.
- Domains (Business/Technologies)
- Software Development and Tooling
- Infrastructure and Configuration
- Software Testing and Documentation
- Software Development Methodologies
- Architecture and Design Approach
- Project Discovery
- Project IDP
- Cloud
- DevOps
- Artificial Intelligence
Domains (Business/Technologies)
- Finance: commercial banking, payments, loans, deposits, credit cards, technologies, and operations.
- In-house trading, investment, and development platforms across multiple asset classes.
- Platform engineering and data platform engineering domains.
- Home Office and policing, including integrations with various backend systems such as the Police National Computer database.
- Web application development for various CRM systems, including print, recruitment, and business networking.
- Electronic manufacturing services, developing in-circuit and functional test solutions for manufactured products (including large telecommunication backplanes, PC peripherals, and server motherboards).
Software Development and Tooling
- Core programming languages: Java, JavaScript, PHP, Python, and R.
- Software development stack: back-end services (microservice APIs, schedulers, async messaging) and front-ends (JavaFX GUIs, web SPAs).
- Database development: MySQL (MariaDB), SQL Server, Oracle, Postgres, plus MongoDB and Elasticsearch non-relational/NoSQL solutions.
- Cooperating processes and concurrency through multi-threaded techniques.
- Frameworks including Spring Boot and Java EE.
- Service communication via HTTP, gRPC, WebSockets, and low-level BSD sockets.
- Styles include JSON over REST, XML over SOAP, and bespoke application protocols.
- LeetCode 75 and 150 challenges.
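As a brief illustration of the cooperating-thread techniques listed above, here is a minimal sketch of a shared-queue worker pool in Python (one of the core languages listed); the squaring task and worker count are purely illustrative, not taken from any Digilogue service:

```python
import queue
import threading

def run_pipeline(items, workers=3):
    """Fan work out to cooperating worker threads via a shared task queue."""
    tasks = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = tasks.get()
            if item is None:          # sentinel: no more work for this thread
                tasks.task_done()
                return
            processed = item * item   # stand-in for real processing
            with lock:                # guard the shared results list
                results.append(processed)
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for item in items:
        tasks.put(item)
    for _ in threads:
        tasks.put(None)               # one sentinel per worker
    for t in threads:
        t.join()
    return sorted(results)

print(run_pipeline([1, 2, 3, 4]))     # → [1, 4, 9, 16]
```

The same producer/consumer shape maps directly onto Java's `BlockingQueue` and `ExecutorService` in JVM services.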
Infrastructure and Configuration
- TLS, DNS, and DDNS configuration.
- Subdomain routing to end services via the web server.
- Basic Authentication, Bearer tokens with JWT, API/session keys, OAuth 2.0, mTLS, and cookies.
- Linux, router, and Apache web server configuration with virtual hosts.
- Traffic management and telemetry configuration via Kiali (for Istio) and Vizceral, plus configuration with end services.
- Service mesh with Istio, and various offerings with Consul and Zookeeper such as service discovery, application configuration KV stores, and health checks.
- Comprehensive health check capability for application health, DB state, and NAS mounts via scripts, remote SSH, built-in bespoke app health checks, and web APIs.
- Secure credential management with HashiCorp Vault.
- API gateway and PaaS configuration for services.
- Mail configuration for applications via SMTP and IMAP.
- Local on-prem data center assembly and configuration, including 14 Linux devices, 2 processing rigs, 2 DBMS servers, 2 web servers, a power station, UPSs, NAS drives, and associated networking and routing equipment.
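The web-API style of health check described above can be sketched as a small polling client; the endpoint URL and the `{"status": "UP"}` payload shape are assumptions for illustration, not the actual bespoke checks:

```python
import json
import urllib.request

def is_healthy(status_code, body):
    """Decide health from an HTTP status and a parsed JSON body.

    The {"status": "UP"} convention is an assumed payload shape.
    """
    return status_code == 200 and body.get("status") == "UP"

def check_health(url, timeout=5):
    """Poll a service's health endpoint; returns False on any failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return is_healthy(resp.status, json.load(resp))
    except (OSError, ValueError):
        return False
```

In practice such a probe would be driven by a scheduler or monitoring loop, alongside the script- and SSH-based checks for DB state and NAS mounts.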
Software Testing and Documentation
- Utilizing various test strategies by evaluating the test pyramid on a project-by-project basis.
- Low-level unit testing with x-unit libraries.
- Very occasional mocking within unit tests via Mockito.
- Service-level integration tests driven by JUnit, often requiring infra/service dependencies; can be integrated into the pipeline or run locally.
- End-to-end integration tests driven by JUnit or a driver, requiring infra/service dependencies; can be integrated into the pipeline or run locally.
- Manual exploratory testing, and sanity testing strategies utilizing all methods.
- Performance testing via bespoke setups (piggybacking on end-to-end solutions) or tools such as JMeter.
- Benchmark testing utilizing various techniques.
- Javadoc used primarily for Java documentation.
- README.md markdown for Git repos.
- Wikis such as Confluence, bespoke wikis, and wiki markup.
- DocBook for release notes and other documents.
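The x-unit style of low-level unit testing mentioned above follows the same shape across languages; a minimal sketch using Python's `unittest` (the `apply_discount` function under test is hypothetical, invented for illustration):

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)
```

Run with `python -m unittest`; on JVM projects the same arrange/act/assert structure carries over to JUnit.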
Software Development Methodologies
- Project level: Scrum (utilizing Scrum Master principles for interacting with key stakeholders) with efficient use of ceremonies, Kanban Lite, two-pizza teams, and wikis.
- Development level: XP, pair programming (pair and split), code reviews, definition-of-done lists, ask-anything-anytime, and full collaboration and socializing of key ideas.
- Ensuring access to and setup of all environments as needed (dev, test, pre-prod, prod, etc.) to satisfy all development and testing requirements for successful delivery.
Architecture and Design Approach
- Two key styles of functional design are commonly employed:
  - Big Bang / Upfront Design: This approach involves comprehensive, well-thought-out design from the outset. It is particularly useful for larger projects where a clear direction is needed early in the lifecycle, especially during inception.
  - Iterative / Incremental Design: This approach involves short, iterative bursts of design and development. It is well-suited to projects with evolving or loosely defined requirements and aligns closely with Agile and Scrum methodologies.
- In practice, both approaches can be valuable and are often used in combination within a single project. The key lies in remaining adaptable and responsive to change, while gathering as much relevant information upfront as possible.
- Non-functional requirements are always given high priority. Performance, Logging, Security, Scalability, Redundancy, Resiliency, and Availability are at the forefront of consideration. In addition, Maintainability, Testability, Usability, Interoperability, Compliance, Observability, and Cost-efficiency are often essential factors depending on the system's context and objectives.
- Digilogue's approach to system design begins with thoughtful consideration: pondering ideas, identifying potential challenges, and evaluating trade-offs. This often involves sketching out architectural topologies, drafting system interaction diagrams, and mapping out data flow and component responsibilities. These early design artifacts serve as a foundation for deeper technical exploration.
- System design is not a solitary exercise; it thrives on collaboration. By inviting discussion through design reviews, technical huddles, and whiteboarding sessions, teams can surface blind spots, incorporate diverse expertise, and collectively arrive at well-reasoned solutions. Input from developers, architects, security specialists, operations engineers, and other technical stakeholders is not just welcome; it is essential.
- Getting system design right is critical. It directly impacts scalability, maintainability, resilience, and performance. A well-designed system enables teams to move faster, adapt more easily, and handle growth with confidence. It is a highly technical, iterative, and strategic discipline that sets the tone for everything that follows in the development lifecycle.
- Common areas that impact the broader enterprise, such as coding standards, architectural styles, or design conventions, are always brought to the attention of the wider enterprise architecture group (where it makes sense) for alignment and agreement. For example, having four different approaches to handling datetime across systems can present challenges down the line.
Project Discovery
- Project Discovery is a home-grown initiative led by Gary and developed exclusively in-house at Digilogue. Its primary objective was, and continues to be, to generate gains in open spot markets, ultimately resulting in a suite of services designed for investment and trading.
- From a top-down perspective, the project can be viewed as comprising two distinct categories:
  - Discovery: This phase focused on the inception, design, and exploration of new time-series algorithms. It leveraged a subset of artificial intelligence known as evolutionary computation, where problems are framed as single-objective search-based optimization tasks. This approach enabled the identification of optimal parameter sets for newly developed algorithms.
  - Engineering: Once algorithms were discovered and fine-tuned, the remainder of the work transitioned into engineering. This included porting the finalized algorithms into service bot implementations and deploying them to target environments via CI/CD pipelines. The engineering phase also encompassed all supporting activities, from infrastructure setup and configuration to the assembly of hardware and networking within the local on-premises data center. It served as a broad catch-all for everything required to operationalize the system.
- The project also benefited from a valued part-time collaboration with two former colleagues. One contributed primarily to the engineering effort, developing graphing and visualization tools that play a key role in analyzing the intermediate pricing signals generated by our algorithms. The other focused more on the discovery phase, working on single-token and investment algorithms and making fantastic discoveries. Both individuals were instrumental in supporting Gary throughout the initiative, helping to keep the work exciting, engaging, and forward-moving. Gary is sincerely grateful for their contributions and would like to extend his heartfelt thanks for their dedication, enthusiasm, and collaboration.
- This project represents an effort spanning approximately four years, including around two years of full-time dedication from Gary. What began as a fun and exploratory home project, without any guaranteed success in achieving accurate time-series predictions, has since evolved into a robust and mature platform. As of today, the system comprises 14 live microservices that collectively power the investment platform as well as the single-token and multi-token trading systems, marking a significant and rewarding transformation over time.
- More detailed information on the project can be found here or accessed from the menu above.
Project IDP
- Project IDP (Integrated Development Platform) is the active successor to Project Discovery, continuing the work in the trading and investment domain.
- At its core, the project focuses on developing a platform tool that provides end-to-end, turnkey solutions for building and deploying trading and investment systems, all orchestrated from a single, centralized interface. This approach streamlines not only the underlying technology stack but also the associated development processes.
- The toolset includes both a graphical user interface and distributed system components, designed to fully support the core objectives outlined below:
  - A single core language was selected to meet all functional and performance requirements, following a benchmarking process that evaluated multiple language options prior to project initiation.
  - All previous processes (data fetching, algorithm design, discovery, and deployment) are to be fully integrated into a unified workflow.
  - AI and LLM technologies are introduced through bespoke agentic AI capabilities, enabling autonomous experimentation and discovery of both new and existing algorithms, guided by a human operator, quant, or engineer.
  - Many additional features are planned for future development, including advanced charting and visualization simulations.
- More detailed information on the project can be found here or accessed from the menu above.
Cloud
- For day-to-day software engineering consulting, cloud services from providers such as AWS, GCP, and Azure have been utilized to varying degrees based on project requirements.
- Cloud-native development has also been conducted across both on-premises and off-premises data centers, leveraging PaaS platforms such as PCF and OpenShift.
- While some in-house projects, such as those listed above, have utilized Docker for certain services, the vast majority are set up and deployed directly across various Linux environments.
- Local applications utilize the screen utility for daemonizing services, rather than relying on systemd. This approach presents a different set of trade-offs compared to systemd, containerization, or Platform-as-a-Service (PaaS) solutions, but it aligns well with the specific prototyping needs and constraints of those projects.
DevOps
- Select DevOps functions have also been performed on various contracts, including the development and maintenance of Ansible scripts.
- In-house projects, such as Project Discovery, have relied on a combination of CI/CD pipelines using Jenkins, alongside Bash scripts and remote SSH access, to manage deployments and execute discovery jobs.
- Naturally, one of the key objectives of Project IDP is to eliminate the need for traditional CI/CD processes in both deployment and discovery workflows.
- From an operations perspective, the focus is primarily on hands-on monitoring through log analysis, visualization tools, bespoke health checks, and telemetry visualization.
Artificial Intelligence
- Artificial intelligence is a broad field within computer science that has recently gained widespread attention, particularly with the rise of large language models (LLMs) built on artificial neural networks. While Digilogue is actively leveraging these LLMs within the agentic AI space, this represents only one facet of its broader use of AI technologies.
- Evolutionary computation, using techniques such as genetic algorithms (GAs), is particularly well-suited to tackling NP-hard problems when they are framed as search problems. These algorithms excel at exploring vast search spaces, often on the scale of quintillions of possibilities, which are impractical to traverse exhaustively.
- One of the key motivations for adopting this approach over alternatives like recurrent neural networks (RNNs) lies in its interpretability: the core time-series trading algorithms developed in Projects Discovery and IDP can be reasoned about and crafted by a human author. This allows for the injection of domain knowledge and creativity before wrapping the logic in an evolutionary search mechanism, such as a GA, to fine-tune its parameters or structure.
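A minimal sketch of the single-objective, search-based optimization approach described above, using a simple genetic algorithm; the toy fitness function, operator choices, and parameter bounds are illustrative stand-ins, not the actual Discovery or IDP algorithms:

```python
import random

def genetic_search(fitness, bounds, pop_size=40, generations=60,
                   mutation_rate=0.1, seed=42):
    """Minimal single-objective GA: tournament selection, uniform
    crossover, bounded Gaussian mutation, and simple elitism."""
    rng = random.Random(seed)

    def random_individual():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def mutate(ind):
        # Perturb each gene with probability mutation_rate, clamped to bounds.
        return [min(hi, max(lo, g + rng.gauss(0, (hi - lo) * 0.1)))
                if rng.random() < mutation_rate else g
                for g, (lo, hi) in zip(ind, bounds)]

    def crossover(a, b):
        # Uniform crossover: each gene drawn from either parent.
        return [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]

    def tournament(pop):
        # Pick the fittest of three random contenders.
        return max(rng.sample(pop, 3), key=fitness)

    pop = [random_individual() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
        pop[0] = best                 # elitism: never lose the best so far
        best = max(pop, key=fitness)
    return best

# Toy objective with a known optimum at (3, -1).
best = genetic_search(lambda p: -(p[0] - 3) ** 2 - (p[1] + 1) ** 2,
                      bounds=[(-10, 10), (-10, 10)])
```

In a real parameter search, the fitness function would instead score an algorithm's candidate parameter set against historical time-series data, which is exactly where a human-authored, interpretable core algorithm pays off.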
For more individual contract information, please follow the LinkedIn link
below:
Gary Black
Software Engineering Consultant
Digilogue Technologies Ltd.
Toronto, Ontario, Canada
📞 1(416) 931-3508
📧 gary.black@digilogue.ca
🌐
https://www.digilogue.ca
💼
https://www.linkedin.com/in/digilogue