Digilogue Technologies

Innovative Software Engineering Solutions & Enterprise Consulting

Founded and led by seasoned software consultant Gary Black, Digilogue Technologies Ltd., based in Toronto, Ontario, Canada, specializes in microservices, distributed systems, AI-driven architecture, and cloud-native development.

With over 20 years of experience in engineering, high-performance computing and enterprise architecture, Gary delivers solutions that help businesses scale, optimize, and innovate — each crafted with precision and deep technical insight.

Software Engineering Technologies and Skillset

This section provides a deeper look into Digilogue’s core areas of expertise—offering clients a clearer understanding of the capabilities and services available. It can be thought of as an extended resume.

  1. Domains (Business/Technologies)
  2. Software Development and Tooling
  3. Infrastructure and Configuration
  4. Software Testing and Documentation
  5. Software Development Methodologies
  6. Architecture and Design Approach
  7. Project Discovery
  8. Project IDP
  9. Cloud
  10. DevOps
  11. Artificial Intelligence

Domains (Business/Technologies)

  1. Finance: commercial banking, payments, loans, deposits, credit cards, and the related technologies and operations.
  2. In-house trading, investment and development platforms across multiple asset classes.
  3. Platform engineering and data platform engineering domains.
  4. Home Office and policing, including various backend systems integrations such as the Police National Computer database.
  5. Web application development for various CRM systems including print, recruitment and business networking.
  6. Electronic manufacturing services, developing in-circuit and functional test solutions for manufactured products (including large telecommunication backplanes, PC peripherals, and server motherboards).

Software Development and Tooling

  1. Core programming languages: Java, JavaScript, PHP, Python, and R.
  2. Software development stack: back-end services (microservices APIs, schedulers, async messaging), front-end (JavaFX GUI, Web SPAs).
  3. Database development includes: MySQL (MariaDB), SQL Server, Oracle, and PostgreSQL relational databases, plus MongoDB and Elasticsearch non-relational/NoSQL solutions.
  4. Cooperating processes and concurrency through multi-threaded techniques.
  5. Frameworks including Spring Boot and Java EE.
  6. Service communication via HTTP, gRPC, WebSockets, and low-level BSD sockets.
  7. Styles include JSON over REST, XML over SOAP, and bespoke application protocols (a minimal REST sketch follows this list).
  8. LeetCode 75 and 150 challenges.
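
As a small illustration of the JSON-over-REST style referenced in item 7, here is a minimal Spring Boot endpoint sketch. It assumes the spring-boot-starter-web dependency; the Quote record, the /api/quotes path, and the placeholder price lookup are illustrative inventions, not code from any Digilogue project.

    // Minimal JSON-over-REST sketch with Spring Boot (assumes spring-boot-starter-web).
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class QuoteService {

        record Quote(String symbol, double price) { }   // serialized to JSON by Jackson

        // GET /api/quotes/BTC -> {"symbol":"BTC","price":42000.0}
        @GetMapping("/api/quotes/{symbol}")
        Quote quote(@PathVariable("symbol") String symbol) {
            return new Quote(symbol, lookupPrice(symbol));
        }

        private double lookupPrice(String symbol) {
            return 42000.0; // hypothetical placeholder; a real service would query a data source
        }

        public static void main(String[] args) {
            SpringApplication.run(QuoteService.class, args);
        }
    }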

Infrastructure and Configuration

  1. TLS, DNS and DDNS configurations.
  2. Subdomain routing to end services via the web server.
  3. Basic Authentication, Bearer Tokens with JWT, API/session keys, OAuth 2.0, mTLS, and cookies.
  4. Linux, Router and Apache Web configurations with virtual hosts.
  5. Traffic management and telemetry configuration via Kiali (for Istio) and Vizceral, plus configuration with end services.
  6. Service mesh with Istio and various offerings with Consul and Zookeeper such as service discovery, app configuration KV store and health checks.
  7. Comprehensive health-check capability for application health, DB state, and NAS mounts via scripts, remote SSH, built-in bespoke app health checks, and web APIs (a minimal check sketch follows this list).
  8. Secure credential management store with HashiCorp Vault.
  9. API Gateway and PaaS configuration for services.
  10. Mail configuration for applications via SMTP and IMAP.
  11. Local on-prem data center assembly and configuration including: 14 Linux devices, 2 processing rigs, 2 DBMS servers, 2 web servers, a power station, UPSs, NAS drives, and associated networking and routing equipment.
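
As a small illustration of the bespoke health checks in item 7, here is a minimal sketch that probes a database connection and a NAS mount point. It assumes a MariaDB JDBC driver on the classpath; the connection URL, credentials, and mount path are illustrative placeholders, not real Digilogue endpoints.

    // Minimal health-check sketch: probes DB connectivity and a NAS mount.
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HealthCheck {
        public static void main(String[] args) {
            boolean dbUp = false;
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://db.example.internal:3306/app", "monitor", "secret")) {
                dbUp = c.isValid(2); // validate within a 2-second timeout
            } catch (Exception e) {
                // any connection failure leaves dbUp = false
            }
            boolean nasUp = Files.isDirectory(Path.of("/mnt/nas/appdata")); // mount present?
            System.out.printf("db=%s nas=%s%n", dbUp ? "UP" : "DOWN", nasUp ? "UP" : "DOWN");
            System.exit(dbUp && nasUp ? 0 : 1); // non-zero exit signals failure to callers
        }
    }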

Software Testing and Documentation

  1. Utilizing various test strategies, evaluating the test pyramid on a project-by-project basis.
  2. Low-level unit testing with xUnit-style libraries.
  3. Very occasional mocking of dependencies in unit tests via Mockito (a test sketch follows this list).
  4. Service-level integration tests driven by JUnit, often requiring infra/service dependencies; these can be integrated into the pipeline or run locally.
  5. End-to-end integration tests driven by JUnit or a test driver, requiring infra/service dependencies; these can be integrated into the pipeline or run locally.
  6. Manual exploratory testing, plus sanity-testing strategies utilizing all of the above methods.
  7. Performance testing via bespoke setups (piggybacking on end-to-end solutions) or tools such as JMeter.
  8. Benchmark testing utilizing various techniques.
  9. Javadoc used primarily for Java API documentation.
  10. README.md markdown for git repos.
  11. Wikis such as Confluence, bespoke wikis, and wiki markup.
  12. DocBook for release notes and other documents.
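
As a small illustration of items 2 and 3 above, here is a minimal JUnit 5 test with a Mockito mock. It assumes junit-jupiter and mockito-core on the test classpath; the PriceFeed collaborator and the portfolio valuation are hypothetical examples, not code from a real engagement.

    // Minimal xUnit-style sketch: JUnit 5 assertions plus a Mockito mock.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class PortfolioTest {

        interface PriceFeed {                   // hypothetical collaborator to be mocked
            double price(String symbol);
        }

        static double value(PriceFeed feed, String symbol, int units) {
            return feed.price(symbol) * units;  // unit under test
        }

        @Test
        void valuesHoldingAtCurrentPrice() {
            PriceFeed feed = mock(PriceFeed.class);     // mock the dependency
            when(feed.price("BTC")).thenReturn(100.0);  // stub the price lookup
            assertEquals(500.0, value(feed, "BTC", 5), 1e-9);
        }
    }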

Software Development Methodologies

  1. Project level: Scrum (utilizing Scrum Master principles for interacting with key stakeholders) with efficient use of ceremonies, Kanban Lite, two-pizza teams, and wikis.
  2. Development level: XP, pair programming (pair and split), code reviews, definition-of-done lists, an ask-anything-anytime culture, and full collaboration and socializing of key ideas.
  3. Ensuring access to and setup of all environments as needed (dev, test, pre-prod, prod, etc.) to satisfy all development and testing requirements for successful delivery.

Architecture and Design Approach

  1. Two Key Styles of Functional Design Commonly Employed:
    1. Big Bang / Upfront Design:
      This approach involves comprehensive, well-thought-out design from the outset. It is particularly useful for larger projects where a clear direction is needed early in the lifecycle, especially during inception.
    2. Iterative / Incremental Design:
      This approach involves short, iterative bursts of design and development. It is well-suited to projects with evolving or loosely defined requirements and aligns closely with Agile and Scrum methodologies.
  2. In practice, both approaches can be valuable and are often used in combination within a single project. The key lies in remaining adaptable and responsive to change, while gathering as much relevant information upfront as possible.
  3. Non-functional requirements are always given high priority. Performance, Logging, Security, Scalability, Redundancy, Resiliency, and Availability are at the forefront of consideration. In addition, Maintainability, Testability, Usability, Interoperability, Compliance, Observability, and Cost-efficiency are often essential factors depending on the system’s context and objectives.
  4. Digilogue's approach to System Design begins with thoughtful consideration—pondering ideas, identifying potential challenges, and evaluating trade-offs. This often involves sketching out architectural topologies, drafting system interaction diagrams, and mapping out data flow and component responsibilities. These early design artifacts serve as a foundation for deeper technical exploration.
  5. System design is not a solitary exercise; it thrives on collaboration. By inviting discussion through design reviews, technical huddles, and whiteboarding sessions, teams can surface blind spots, incorporate diverse expertise, and collectively arrive at well-reasoned solutions. Input from developers, architects, security specialists, operations engineers, and other technical stakeholders is not just welcome—it’s essential.
  6. Getting system design right is critical. It directly impacts scalability, maintainability, resilience, and performance. A well-designed system enables teams to move faster, adapt more easily, and handle growth with confidence. It is a highly technical, iterative, and strategic discipline that sets the tone for everything that follows in the development lifecycle.
  7. Common areas that impact the broader enterprise, such as coding standards, architectural styles, or design conventions, are always brought to the attention of the wider enterprise architecture group (where they make sense) for alignment and agreement. For example, having four different approaches to handling datetime across systems can present challenges down the line (a convention sketch follows this list).
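
As one sketch of such an enterprise-wide convention, the snippet below standardizes on ISO-8601 timestamps in UTC on the wire, converting to local zones only at the display edge. The convention itself is an assumption for illustration, not a prescribed Digilogue standard.

    // Sketch of a datetime convention: store/transmit UTC, localize only for display.
    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class DateTimeConvention {
        public static void main(String[] args) {
            Instant event = Instant.now();            // single source of truth, in UTC
            String wireFormat = event.toString();     // ISO-8601, e.g. 2024-05-01T14:30:00Z
            ZonedDateTime display = Instant.parse(wireFormat)
                    .atZone(ZoneId.of("America/Toronto")); // localize at the edge only
            System.out.println(wireFormat + " -> " + display);
        }
    }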

Project Discovery

  1. Project Discovery is a home-grown initiative led by Gary and developed exclusively in-house at Digilogue. Its primary objective was—and continues to be—to generate gains in open spot markets, ultimately resulting in a suite of services designed for investment and trading.
  2. From a top-down perspective, the project can be viewed as comprising two distinct categories:
    1. Discovery – This phase focused on the inception, design, and exploration of new time-series algorithms. It leveraged a subset of artificial intelligence known as evolutionary computation, where problems are framed as single-objective search-based optimization tasks. This approach enabled the identification of optimal parameter sets for newly developed algorithms (a toy fitness-function sketch follows this list).
    2. Engineering – Once algorithms were discovered and fine-tuned, the remainder of the work transitioned into engineering. This included porting the finalized algorithms into service bot implementations and deploying them to target environments via CI/CD pipelines. The engineering phase also encompassed all supporting activities—from infrastructure setup and configuration to the assembly of hardware and networking within the local on-premises data center. It served as a broad catch-all for everything required to operationalize the system.
  3. The project also benefited from a valued part-time collaboration with two former colleagues. One contributed primarily to the engineering effort, developing graphing and visualization tools that play a key role in analyzing the intermediate pricing signals generated by the algorithms. The other focused more on the discovery phase, working on single-token and investment algorithms and making fantastic discoveries. Both individuals were instrumental in supporting Gary throughout the initiative, helping to keep the work exciting, engaging, and forward-moving. Gary is sincerely grateful for their contributions and would like to extend his heartfelt thanks for their dedication, enthusiasm, and collaboration.
  4. This project represents an effort spanning approximately four years, including around two years of full-time dedication from Gary. What began as a fun and exploratory home project—without any guaranteed success in achieving accurate time-series predictions—has since evolved into a robust and mature platform. As of today, the system comprises 14 live microservices that collectively power both the investment platform as well as the single-token and multi-token trading systems, marking a significant and rewarding transformation over time.
  5. More detailed information can be found on the project here or accessed from the menu above.
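
To make the single-objective framing in the Discovery phase concrete, here is a toy fitness-function sketch: a candidate parameter set is scored by naively backtesting a moving-average crossover over a historical price series. The strategy and the data are illustrative assumptions, not Digilogue's actual algorithms.

    // Toy fitness function: score a candidate parameter set against a price series.
    public class FitnessSketch {

        // Candidate solution: two moving-average window lengths.
        record Params(int fastWindow, int slowWindow) { }

        // Fitness = profit of a naive long-only fast/slow crossover strategy.
        static double fitness(Params p, double[] prices) {
            double cash = 0, position = 0;
            for (int t = p.slowWindow(); t < prices.length; t++) {
                double fast = mean(prices, t, p.fastWindow());
                double slow = mean(prices, t, p.slowWindow());
                if (fast > slow && position == 0) {        // enter long
                    position = 1; cash -= prices[t];
                } else if (fast < slow && position == 1) { // exit
                    position = 0; cash += prices[t];
                }
            }
            return cash + position * prices[prices.length - 1]; // mark to market
        }

        static double mean(double[] prices, int end, int window) {
            double sum = 0;
            for (int i = end - window; i < end; i++) sum += prices[i];
            return sum / window;
        }

        public static void main(String[] args) {
            double[] prices = {10, 11, 12, 11, 13, 14, 13, 15, 16, 15, 17};
            System.out.println(fitness(new Params(2, 4), prices)); // higher is fitter
        }
    }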

Project IDP

  1. Project IDP (Integrated Development Platform) is the active successor to Project Discovery, continuing the work in the trading and investment domain.
  2. At its core, the project focuses on developing a platform tool that provides end-to-end, turnkey solutions for building and deploying trading and investment systems—all orchestrated from a single, centralized interface. This approach streamlines not only the underlying technology stack but also the associated development processes.
  3. The toolset includes both a graphical user interface and distributed system components, designed to fully support its core objectives outlined below:
    1. A single core language was selected to meet all functional and performance requirements, following a benchmarking process that evaluated multiple language options prior to project initiation.
    2. All previous processes—data fetching, algorithm design, discovery, and deployment—are to be fully integrated into a unified workflow.
    3. AI and LLM technologies are introduced through bespoke agentic AI capabilities, enabling autonomous experimentation and discovery of both new and existing algorithms—guided by a human operator, quant, or engineer.
    4. Many additional features are planned for future development, including advanced charting and visualization simulations.
  4. More detailed information can be found on the project here or accessed from the menu above.

Cloud

  1. For day-to-day software engineering consulting, cloud services from providers such as AWS, GCP, and Azure have been utilized to varying degrees based on project requirements.
  2. Cloud-native development has also been conducted across both on-premises and off-premises data centers, leveraging PaaS platforms such as PCF and OpenShift.
  3. While some in-house projects, such as those listed above, have utilized Docker for certain services, the vast majority are set up and deployed directly across various Linux environments.
  4. Local applications utilize the screen utility for daemonizing services, rather than relying on systemd. This approach presents a different set of trade-offs compared to systemd, containerization, or Platform-as-a-Service (PaaS) solutions. However, it aligns well with the specific prototyping needs and constraints of those projects.

DevOps

  1. Select DevOps functions have also been performed on various contracts, including the development and maintenance of Ansible scripts.
  2. In-house projects, such as Project Discovery, have relied on a combination of CI/CD pipelines using Jenkins, alongside Bash scripts and remote SSH access, to manage deployments and execute discovery jobs.
  3. Naturally, one of the key objectives of Project IDP is to eliminate the need for traditional CI/CD processes in both deployment and discovery workflows.
  4. From an operations perspective, the focus is primarily on hands-on monitoring through log analysis, visualization tools, bespoke health checks, and telemetry visualization.

Artificial Intelligence

  1. Artificial Intelligence is a broad field within computer science that has recently gained widespread attention, particularly with the rise of large language models (LLMs) built on artificial neural networks. While Digilogue is actively leveraging these LLMs within the Agentic AI space, this represents only one facet of its broader use of AI technologies.
  2. Evolutionary Computation, using techniques such as Genetic Algorithms (GAs), is particularly well-suited to tackling NP-hard problems when they are framed as search problems. These algorithms excel at exploring vast search spaces, often on the scale of quintillions of possibilities, which are impractical to traverse exhaustively (a minimal GA sketch follows this list).
  3. One of the key motivations for adopting this approach over alternatives like Recurrent Neural Networks (RNNs) lies in its interpretability: the core time-series trading algorithms developed in Projects Discovery and IDP can be reasoned about and crafted by a human author. This allows for the injection of domain knowledge and creativity before wrapping the logic in an evolutionary search mechanism, such as a GA, to fine-tune its parameters or structure.
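
To ground the description in item 2, here is a minimal genetic-algorithm sketch with tournament selection, single-point crossover, and mutation. The toy fitness function (distance to a hidden target vector) is an assumption standing in for a real backtest or simulation score.

    // Minimal GA sketch: evolve integer parameter vectors toward higher fitness.
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.Random;

    public class GaSketch {
        static final Random RNG = new Random(42);
        static final int GENES = 5, POP = 40, GENERATIONS = 100;

        // Toy single-objective fitness: closeness to a hidden target vector.
        static double fitness(int[] genes) {
            int[] target = {3, 1, 4, 1, 5};
            double error = 0;
            for (int i = 0; i < GENES; i++) error += Math.abs(genes[i] - target[i]);
            return -error; // higher is better
        }

        static int[] randomIndividual() {
            int[] g = new int[GENES];
            for (int i = 0; i < GENES; i++) g[i] = RNG.nextInt(10);
            return g;
        }

        // Tournament selection: the fitter of two random individuals survives.
        static int[] tournament(int[][] pop) {
            int[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
            return fitness(a) >= fitness(b) ? a : b;
        }

        public static void main(String[] args) {
            int[][] pop = new int[POP][];
            for (int i = 0; i < POP; i++) pop[i] = randomIndividual();
            for (int gen = 0; gen < GENERATIONS; gen++) {
                int[][] next = new int[POP][];
                for (int i = 0; i < POP; i++) {
                    int[] p1 = tournament(pop), p2 = tournament(pop);
                    int cut = RNG.nextInt(GENES);          // single-point crossover
                    int[] child = new int[GENES];
                    for (int j = 0; j < GENES; j++) child[j] = j < cut ? p1[j] : p2[j];
                    if (RNG.nextDouble() < 0.1)            // occasional mutation
                        child[RNG.nextInt(GENES)] = RNG.nextInt(10);
                    next[i] = child;
                }
                pop = next;
            }
            int[] best = Arrays.stream(pop)
                    .max(Comparator.comparingDouble(GaSketch::fitness)).get();
            System.out.println("best " + Arrays.toString(best) + ", fitness " + fitness(best));
        }
    }
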
For more individual contract information, please follow the LinkedIn link below:


Gary Black
Software Engineering Consultant
Digilogue Technologies Ltd.
Toronto, Ontario, Canada

📞 1(416) 931-3508
📧 gary.black@digilogue.ca
🌐 https://www.digilogue.ca
💼 https://www.linkedin.com/in/digilogue