Hi everyone,

In a project involving Firebase and object types like Tickets, Schedules, and Timers, I want to structure my classes such that switching databases (potentially to MySQL) wouldn’t require a complete rewrite.

Approach 1:

  • A DatabaseProxy interface with object-specific methods (createTicket, createTimer, etc.)
  • A FirebaseProxy class implementing the interface, with methods for each object type (e.g., createTicket, createTimer, etc.)
  • Manager classes for Tickets, Schedules, and Timers that mostly use the FirebaseProxy for operations. This leaves room for processing input/output, but most of the time the manager classes will just be calling methods on the Proxy directly (sketched just below).
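
To make the comparison concrete, here's a rough TypeScript sketch of Approach 1, assuming the Firestore Admin SDK; the Ticket/Timer shapes, collection names, and manager are placeholders, not my real model:

```typescript
import { Firestore } from "firebase-admin/firestore";

// Placeholder shapes -- not the real models.
type Ticket = { id: string; title: string };
type Timer = { id: string; durationMs: number };

// Approach 1: one method per operation per object type.
interface DatabaseProxy {
  createTicket(ticket: Ticket): Promise<void>;
  createTimer(timer: Timer): Promise<void>;
  // ...readTicket, updateTicket, deleteTicket, and so on for each type
}

class FirebaseProxy implements DatabaseProxy {
  constructor(private db: Firestore) {}

  async createTicket(ticket: Ticket): Promise<void> {
    await this.db.collection("tickets").doc(ticket.id).set(ticket);
  }

  async createTimer(timer: Timer): Promise<void> {
    await this.db.collection("timers").doc(timer.id).set(timer);
  }
}

// Managers mostly just forward to the proxy.
class TicketManager {
  constructor(private proxy: DatabaseProxy) {}

  createTicket(ticket: Ticket): Promise<void> {
    return this.proxy.createTicket(ticket);
  }
}
```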

Approach 2:

  • A DatabaseProxy interface with the most basic CRUD methods (create, read, update, delete).
  • A FirebaseProxy class implementing the interface.
  • Manager classes for Tickets, Schedules, and Timers that implement createTicket, createTimer, etc. by calling the FirebaseProxy with parameters like update(collection, ticket) (sketched below).
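
And a rough sketch of Approach 2 under the same assumptions, where the interface only knows generic CRUD against a named collection and the managers own the entity-specific vocabulary:

```typescript
import { Firestore } from "firebase-admin/firestore";

type Ticket = { id: string; title: string };

// Approach 2: the interface only knows generic CRUD on named collections.
interface DatabaseProxy {
  create(collection: string, id: string, data: Record<string, unknown>): Promise<void>;
  read(collection: string, id: string): Promise<Record<string, unknown> | undefined>;
  update(collection: string, id: string, data: Record<string, unknown>): Promise<void>;
  delete(collection: string, id: string): Promise<void>;
}

class FirebaseProxy implements DatabaseProxy {
  constructor(private db: Firestore) {}

  async create(collection: string, id: string, data: Record<string, unknown>): Promise<void> {
    await this.db.collection(collection).doc(id).set(data);
  }

  async read(collection: string, id: string): Promise<Record<string, unknown> | undefined> {
    const snap = await this.db.collection(collection).doc(id).get();
    return snap.data();
  }

  async update(collection: string, id: string, data: Record<string, unknown>): Promise<void> {
    // Merge instead of overwrite so a partial update doesn't drop fields.
    await this.db.collection(collection).doc(id).set(data, { merge: true });
  }

  async delete(collection: string, id: string): Promise<void> {
    await this.db.collection(collection).doc(id).delete();
  }
}

// Managers own the entity-specific vocabulary and the collection names.
class TicketManager {
  constructor(private proxy: DatabaseProxy) {}

  createTicket(ticket: Ticket): Promise<void> {
    return this.proxy.create("tickets", ticket.id, ticket);
  }
}
```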

I like the second approach in theory, but I'm worried the separation is too low-level. What happens if the database I switch to has a schema where passing an object and a collection name isn't enough anymore? For example, will there be problems if I switch between vector, NoSQL, and SQL databases?

Any opinions are appreciated!

  • o_p@lemmy.ml · 1 year ago

    Sounds like the repository pattern would help here.

    I’m doing something similar now where I need to store objects “somewhere”. I have a low-level Repository interface to handle persistence that can do the basic CRUD (mainly get/set for my use case). It’s primarily backed by redis, but that same interface has been backed by Postgres, vault, and in-memory caches depending on the need/environment. Works amazingly well.
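
    For what it's worth, the interface is tiny; something like this (TypeScript just for illustration, with an in-memory backing as the simplest example; the names are made up):

    ```typescript
    // Low-level persistence interface: keyed get/set/delete, no domain knowledge.
    interface Repository<T> {
      get(key: string): Promise<T | undefined>;
      set(key: string, value: T): Promise<void>;
      delete(key: string): Promise<void>;
    }

    // Simplest backing store -- an in-memory map, also handy for tests.
    class InMemoryRepository<T> implements Repository<T> {
      private store = new Map<string, T>();

      async get(key: string): Promise<T | undefined> {
        return this.store.get(key);
      }

      async set(key: string, value: T): Promise<void> {
        this.store.set(key, value);
      }

      async delete(key: string): Promise<void> {
        this.store.delete(key);
      }
    }
    ```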

    As a bonus, we can create a new Repository to migrate data when needed. For example, for a redis or postgres upgrade, we build a MigratingRedisRepository that takes two RedisRepository instances and handles reading from the old one and writing to the new one (rough sketch below).
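
    Roughly like this, reusing the Repository interface from the sketch above (the lazy copy-on-read is just one way to do the migration):

    ```typescript
    // Migration wrapper: writes go to the new store; reads fall back to the old
    // store and copy the value forward, so data migrates lazily over time.
    class MigratingRepository<T> implements Repository<T> {
      constructor(
        private oldRepo: Repository<T>,
        private newRepo: Repository<T>,
      ) {}

      async get(key: string): Promise<T | undefined> {
        const fromNew = await this.newRepo.get(key);
        if (fromNew !== undefined) return fromNew;

        const fromOld = await this.oldRepo.get(key);
        if (fromOld !== undefined) await this.newRepo.set(key, fromOld); // backfill
        return fromOld;
      }

      async set(key: string, value: T): Promise<void> {
        await this.newRepo.set(key, value);
      }

      async delete(key: string): Promise<void> {
        // Delete from both so the old store can't resurrect the value on fallback.
        await Promise.all([this.oldRepo.delete(key), this.newRepo.delete(key)]);
      }
    }
    ```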

    I think you’re on the right track with a mix of 1 and 2. Abstract out the data store; it will change at some point, and you’ll want to control it for tests too. Let services/managers handle state and delegate persistence down to wherever that may be.