Week three of MiseOS, and Hibernate stopped being a tutorial — and started being architecture.
Over the last two weeks, I learned Hibernate through isolated exercises. This week, I finally put those skills into practice by translating a real domain model into a working persistence layer.
What seemed like straightforward persistence work quickly turned into real design tradeoffs:
Should Station be an enum or a database table?
How much validation belongs in a DAO?
And why did three out of four menu slots silently disappear?
What I built:
- A complete ERD for the system
- 9 JPA entities with relationships
- 7 DAO implementations
- A DBValidator utility for defensive programming
- Integration tests for all DAOs using Testcontainers and real PostgreSQL
What I learned: Design decisions are harder than writing code.
The ERD: Drawing before coding#
Last week I had conceptual domain models. This week I turned those into a concrete database design.
I drew the ERD with actual fields and data types, not just abstract entity boxes. I included:
- Primary keys
- Foreign keys
- Data types
- Audit fields
Why include this level of detail? Because it made the JPA entity translation trivial. When I sat down to write DishSuggestion.java, I didn’t have to make any design decisions — I just mapped the diagram to annotations. It also forced me to think through the relationships and constraints upfront, which saved a lot of refactoring later.
The design decisions#
Translating the ERD into working code meant facing a series of design choices. Some were straightforward. Others were more complex.
I faced dozens of decisions this week, but here are the ones that mattered most—the ones that shaped the entire persistence layer and will impact how I build the rest of the system.
Decision 1: Station — Enum or Entity?#
The dilemma: Should Station be a hardcoded Java enum or a database table that can be modified at runtime?
// Option A: Enum (compile-time safety)
public enum StationType {
HOT, COLD, VEGETARIAN, STARTER, BAKERY
}
// Option B: Entity (runtime flexibility)
@Entity
public class Station {
private Long id;
private String stationName;
private String description;
}
I chose Option B: Entity.
Why? Real kitchens vary. A hotel kitchen might have 7 stations. A small bistro might only have 2. If Station is an enum, adding a new station requires:
- Editing Java code
- Recompiling
- Redeploying
That’s ridiculous for reference data. If I hardcode stations, I’m building software for MY vision of a kitchen, not the actual kitchens that will use this.
The tradeoff: I lose type safety. Someone could accidentally create “VARM” and “VARMT” as duplicates.
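The duplicate risk can be narrowed, though. A sketch of what I have in mind (the StationNames class and normalizeName method are mine, not part of MiseOS): normalizing names before a uniqueness check catches case and whitespace variants like “VARM” vs “ varm ”, even if genuine typos like “VARMT” still need a human eye.

```java
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

// Hypothetical guard against near-duplicate station names.
// Not in the codebase yet; illustrative only.
class StationNames {
    private final Set<String> normalized = new HashSet<>();

    // Trim and lower-case so comparisons ignore case and stray whitespace
    static String normalizeName(String raw) {
        return raw.trim().toLowerCase(Locale.ROOT);
    }

    // Returns false instead of storing a duplicate
    boolean add(String raw) {
        return normalized.add(normalizeName(raw));
    }
}
```

In practice this check would live behind a unique constraint on station_name, but it shows the shape of the mitigation: entity flexibility with at least some of the enum's safety back.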
Decision 2: Ingredient normalization — To normalize or not?#
Last week I asked: “Can a Line Cook write ‘løg’, ‘onions’, or ‘Onion’ — or do I force a dropdown?”
I chose: Keep ingredient names as free text strings.
No ingredients table. No foreign keys. Just:
@Setter
@Column(name = "name", nullable = false)
private String name;
Why? Creating a normalized ingredients catalog would mean:
- Pre-populating hundreds of ingredients
- Handling edge cases (organic vs regular, different suppliers)
- Building autocomplete UI
- Dealing with variations the system doesn’t know about
That’s weeks of work for questionable value.
The pragmatic solution: The Head Chef manually reviews ingredient requests and creates shopping list items. If 3 cooks request “løg”, “onions”, and “Onion”, the Head Chef sees all three and creates one aggregated item “Onions — 15kg”.
This is their job anyway. They need to review quantities, check what’s in storage, and decide on suppliers. The system supports this workflow instead of trying to automate what requires human judgment.
If I change my mind later? I can add normalization without breaking anything. Start with simple, add complexity when it’s actually needed.
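The Head Chef's aggregation step can be sketched in plain Java. Everything here (the IngredientAggregator class, the alias map) is illustrative, not code from the project: free-text names stay exactly as typed, and an optional, manually maintained alias map rolls them into one shopping-list line.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the Head Chef's manual aggregation workflow.
class IngredientAggregator {
    // requests: free-text ingredient name -> requested quantity (kg)
    // aliases:  free-text name -> canonical shopping-list name
    static Map<String, Integer> aggregate(Map<String, Integer> requests,
                                          Map<String, String> aliases) {
        Map<String, Integer> totals = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : requests.entrySet()) {
            // Fall back to the raw name when no alias is defined
            String canonical = aliases.getOrDefault(e.getKey(), e.getKey());
            totals.merge(canonical, e.getValue(), Integer::sum);
        }
        return totals;
    }
}
```

This is also the escape hatch for later normalization: the alias map can migrate into a database table without touching the free-text column.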
Decision 3: Interface segregation — How clean is too clean?#
I spent a surprising amount of time thinking about DAO design. Should I follow Interface Segregation Principle strictly and create separate interfaces for each operation?
// Option A: Split interfaces (ISP purist)
public interface ICreateDAO<T> { T create(T entity); }
public interface IReadDAO<T> { Optional<T> getById(Long id); }
public interface IUpdateDAO<T> { T update(T entity); }
public interface IDeleteDAO<T> { void delete(Long id); }
// Then services inject exactly what they need:
public class ReportService {
private final IReadDAO<User> userReader; // Only read, can't modify
}
// Option B: One interface per entity, extending a generic CRUD interface (pragmatic)
public interface IEntityDAO<T, I>{
T create(T t);
Set<T> getAll();
T getByID(I id);
T update(T t);
boolean delete(I id);
}
public interface IUserDAO extends IEntityDAO<User, Long>
{
Optional<User> findByEmail(String email);
Set<User> findByRole(UserRole role);
boolean existsByEmail(String email);
}
I chose Option B.
Why? Because in practice, my services need multiple operations:
public class DishSuggestionService {
private final IDishSuggestionDAO dishDAO;
public DishSuggestion submitDish(...) {
// Needs: create() AND findByStation() AND getById()
}
}
If I split interfaces, I’d inject 3-4 interfaces per service. That’s not cleaner, it’s just more verbose.
The reality: I’m one developer building a 10-week school project, not a team of 50 building microservices. I need appropriate abstractions, not maximum abstractions.
What I DO have:
- Services depend on interfaces (testable via mocking)
- Clear contracts (DAO methods are well-named)
- Separation of concerns (persistence logic stays in DAOs)
That’s enough. If I later need a read-only export service, I can extract IReadDAO<T> then. YAGNI applies.
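To convince myself YAGNI was safe here, I sketched what that later extraction could look like. This is in-memory and the class names (InMemoryUserDAO, ReportService) are made up for the sketch: because IEntityDAO would extend IReadDAO, every existing DAO would satisfy the narrow read-only interface with no changes to callers.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The extracted read-only view...
interface IReadDAO<T, I> {
    Optional<T> getById(I id);
}

// ...which the full CRUD interface simply extends
interface IEntityDAO<T, I> extends IReadDAO<T, I> {
    T create(T entity);
    boolean delete(I id);
}

// Hypothetical in-memory DAO standing in for a real Hibernate one
class InMemoryUserDAO implements IEntityDAO<String, Long> {
    private final Map<Long, String> store = new HashMap<>();
    private long nextId = 1;

    public String create(String name) { store.put(nextId++, name); return name; }
    public boolean delete(Long id) { return store.remove(id) != null; }
    public Optional<String> getById(Long id) { return Optional.ofNullable(store.get(id)); }
}

// A read-only service depends on the narrow interface; it cannot modify anything
class ReportService {
    private final IReadDAO<String, Long> reader;
    ReportService(IReadDAO<String, Long> reader) { this.reader = reader; }
    String describe(Long id) { return reader.getById(id).orElse("unknown"); }
}
```

The refactor is purely additive, which is exactly why deferring it costs nothing.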
Decision 4: Abstract BaseDAO — DRY vs Clarity#
Every DAO has nearly identical CRUD implementations:
// UserDAO
public User create(User user) {
try (EntityManager em = emf.createEntityManager()) {
em.getTransaction().begin();
em.persist(user);
em.getTransaction().commit();
return user;
}
}
// DishSuggestionDAO
public DishSuggestion create(DishSuggestion dish) {
try (EntityManager em = emf.createEntityManager()) {
em.getTransaction().begin();
em.persist(dish);
em.getTransaction().commit();
return dish;
}
}
// ... 7 more DAOs with the same code
Obvious DRY violation. I could create an AbstractBaseDAO<T>:
public abstract class AbstractBaseDAO<T> {
protected final EntityManagerFactory emf;
private final Class<T> entityClass;
public T create(T entity) {
try (EntityManager em = emf.createEntityManager()) {
em.getTransaction().begin();
em.persist(entity);
em.getTransaction().commit();
return entity;
}
}
// ... same for update, delete, getById, getAll
}
// Then:
public class UserDAO extends AbstractBaseDAO<User> implements IUserDAO {
// Only implement custom queries
}
I chose NOT to do this. Yet.
Typing out similar CRUD code multiple times helped me internalize Hibernate’s lifecycle. An early abstraction would have hidden that learning behind inheritance.
More importantly, requirements may diverge. A ShoppingList.create() might eventually validate delivery dates. A WeeklyMenu.delete() might become a soft delete. Premature abstraction would make those changes harder, not easier.
If the number of entities grows or transaction handling becomes more complex, I’ll refactor. Until then, clarity beats DRY.
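If the refactor does happen, divergence like WeeklyMenu’s soft delete doesn’t have to block it; an override can carry it. A plain in-memory sketch (no Hibernate; the class names are illustrative, not the real DAOs):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified in-memory stand-in for a generic base DAO
abstract class AbstractBaseDAO<T> {
    protected final Map<Long, T> store = new HashMap<>();
    private long nextId = 1;

    public long create(T entity) { long id = nextId++; store.put(id, entity); return id; }
    public T getById(long id) { return store.get(id); }
    public boolean delete(long id) { return store.remove(id) != null; }
}

// Divergent behavior lives in an override, not a copy of the whole class
class WeeklyMenuInMemoryDAO extends AbstractBaseDAO<String> {
    private final Map<Long, String> archived = new HashMap<>();

    @Override
    public boolean delete(long id) {
        // Soft delete: move to an archive instead of dropping the row
        String menu = store.remove(id);
        if (menu == null) return false;
        archived.put(id, menu);
        return true;
    }

    public boolean isArchived(long id) { return archived.containsKey(id); }
}
```

So the real risk of the abstraction isn’t divergence itself, it’s abstracting before I know where the divergence will be.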
Decision 5: Cascade operations — When should children die with parents?#
Deciding when to use CascadeType.ALL and orphanRemoval = true was tricky.
A useful mental model was this question: “If the parent disappears, should the child still exist in the real world?”
My rules became:
- Parent owns children → Use cascade + orphanRemoval
- Parent references children → No cascade
Examples:
// WeeklyMenu OWNS its slots
@OneToMany(mappedBy = "weeklyMenu",
cascade = CascadeType.ALL, // Save/update/delete together
orphanRemoval = true) // Delete slot if removed from menu
private Set<WeeklyMenuSlot> weeklyMenuSlots;
// DishSuggestion REFERENCES allergens (shared across dishes)
@ManyToMany
@JoinTable(name = "dish_allergen", ...)
private Set<Allergen> allergens; // No cascade - allergens are shared
Why it matters:
If I had put cascade = CascadeType.ALL on DishSuggestion.allergens:
- Delete a dish
- Hibernate deletes the allergens too
- Now “GLUTEN” is gone from the database
- Every other dish that had gluten breaks
The test: Ask yourself: “If I delete the parent, should the child cease to exist in the real world?”
- Menu deleted → Menu slots should be deleted
- Dish deleted → Allergens still exist for other dishes
Defensive programming: How much should DAOs validate?#
I decided that each layer validates its own inputs — DAOs don’t trust services, and services don’t trust controllers. Layers are decoupled and shouldn’t blindly trust data from above. To avoid duplicating validation logic across 7 DAOs, I created a DBValidator utility with generic methods for basic checks:
public class DBValidator {
public static void validateId(Long id) {
if (id == null || id <= 0) {
throw new IllegalArgumentException("Invalid ID: Must be provided and greater than 0.");
}
}
public static <T> T validateExists(T entity, Object id, Class<T> entityClass) {
if (entity == null) {
String className = entityClass.getSimpleName();
throw new EntityNotFoundException(className + " with ID " + id + " was not found.");
}
return entity;
}
public static void validateNotNull(Object obj, String entityName) {
if (obj == null) {
throw new IllegalArgumentException(entityName + " cannot be null.");
}
}
public static void validateRange(int number, int min, int max, String fieldName) {
if (number < min || number > max) {
throw new IllegalArgumentException(
String.format("%s must be between %d and %d, got: %d",
fieldName, min, max, number)
);
}
}
}
Usage in DAOs:
@Override
public WeeklyMenu getById(Long id) {
DBValidator.validateId(id); // Check before query
try (EntityManager em = emf.createEntityManager()) {
WeeklyMenu menu = em.find(WeeklyMenu.class, id);
return DBValidator.validateExists(menu, id, WeeklyMenu.class);
}
}
Why validate here?
- Fail fast: Catch invalid IDs before hitting the database
- Consistent errors: All DAOs throw the same exceptions for the same problems
Alternative I considered: Let everything bubble up and catch at the controller level. But then:
- NullPointerException instead of IllegalArgumentException with a clear message
- No consistent error format
- Harder to debug
I am still considering adding a DatabaseException to wrap all persistence errors, but for now, this is sufficient.
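What that wrapper might look like, as a hedged sketch: both DatabaseException and the wrap helper are hypothetical, nothing in MiseOS uses them yet.

```java
import java.util.function.Supplier;

// Hypothetical unchecked wrapper for all persistence failures
class DatabaseException extends RuntimeException {
    DatabaseException(String message, Throwable cause) { super(message, cause); }
}

class Persistence {
    // Runs a DAO operation and rewraps whatever runtime exception it throws,
    // so callers catch one exception type with a consistent message format
    static <T> T wrap(String operation, Supplier<T> action) {
        try {
            return action.get();
        } catch (RuntimeException e) {
            throw new DatabaseException("Persistence failure during " + operation, e);
        }
    }
}
```

The appeal is a single catch point per layer; the cost is one more level of indirection in stack traces, which is part of why I haven’t committed to it.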
The IEntity Interface: Why every entity implements it#
All my entities implement a simple interface:
public interface IEntity {
Long getId();
}
It enables generic DAO methods:
public interface IEntityDAO<T extends IEntity> {
T create(T entity);
Optional<T> getById(Long id);
// ...
}
Why? This enabled a surprisingly powerful testing pattern. In my TestPopulator, I can seed Users, Stations, and Dishes once and reuse the same instances across assertions.
By having everything implement IEntity, I can store all my seeded test data in a single Map:
private Map<String, IEntity> seeded = new HashMap<>();
// Later in the populator:
seeded.put("station_cold", stationCold);
seeded.put("user_gordon", chefGordon);
// In my tests:
User gordon = (User) seeded.get("user_gordon");
It keeps the test setup incredibly clean and strongly typed where it matters.
The tradeoff: Every entity MUST have a Long id. But that’s true anyway for my use case.
Benefit: Type safety and better testability. The compiler prevents me from passing non-entity objects to DAOs.
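One refinement I’m considering for that map: a typed getter, so the cast lives in one place instead of in every test. The SeededData class and sample User here are illustrative, not the real classes.

```java
import java.util.HashMap;
import java.util.Map;

interface IEntity { Long getId(); }

// Hypothetical wrapper around the seeded-data map
class SeededData {
    private final Map<String, IEntity> seeded = new HashMap<>();

    void put(String key, IEntity entity) { seeded.put(key, entity); }

    // One checked cast here instead of (User) scattered across 50+ tests;
    // Class.cast throws ClassCastException early if the key holds the wrong type
    <T extends IEntity> T get(String key, Class<T> type) {
        return type.cast(seeded.get(key));
    }
}

// Minimal stand-in entity for the sketch
class User implements IEntity {
    private final Long id;
    User(Long id) { this.id = id; }
    public Long getId() { return id; }
}
```

Tests would then read `seeded.get("user_gordon", User.class)` rather than casting by hand.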
Integration Testing: Bridging the gap between code and database#
I made a strict rule for myself: No DAO is considered “done” until it has a passing integration test. In JPA, your code is essentially a set of instructions for how Java objects should map to database rows. If you only test your Java logic, you’re only testing half the bridge. You can write perfectly clean Java code that compiles, but if your @Table mapping is off, your @Column names are misspelled, or your @JoinColumn points to a non-existent key, the system will fail the moment it hits the database.
Testcontainers: Ephemeral Databases and Production Parity#
To truly prove my DAOs work, I needed to test them against a real database. However, testing against a shared development database is a recipe for “flaky” tests — where one test fails because another test left behind dirty data.
I chose Testcontainers to spin up an ephemeral, isolated PostgreSQL Docker container exclusively for the test suite. This ensures that every time I run my tests, I am starting with a 100% pristine environment.
public class HibernateTestConfig {
private static Properties buildProps() {
Properties props = HibernateBaseProperties.createBase();
// Testcontainers intercepts the JDBC connection
props.put("hibernate.connection.driver_class",
"org.testcontainers.jdbc.ContainerDatabaseDriver");
props.put("hibernate.connection.url",
"jdbc:tc:postgresql:16.2:///test_db");
// Rebuild the schema from scratch for every test run
props.put("hibernate.hbm2ddl.auto", "create-drop");
return props;
}
}
Why this approach is superior:
Production Parity: I am testing against the exact version of PostgreSQL (16.2) that I intend to use in production. This catches subtle bugs like enum mapping issues or PostgreSQL-specific syntax errors that simpler in-memory alternatives might miss.
Catching Mapping Errors: By running against a real instance, I get immediate feedback if my CascadeType.ALL is actually working or if a NOT NULL constraint is being violated.
Zero Cleanup: Because the container is destroyed after the tests finish, I never have to worry about cleaning up “test junk” in my development database.
The TestPopulator: Reusable Test Data#
Testing a query like findByWeekAndYear(int week, int year) requires a deeply nested object graph:
WeeklyMenu → WeeklyMenuSlot → DishSuggestion → User + Station
Writing this setup in every test would be insane. So I built a TestPopulator:
public class TestPopulator {
private final Map<String, IEntity> seeded = new HashMap<>();
public void populate() {
populateStations(); // Create 4 stations
populateUsers(); // Create 4 users (1 head chef, 3 line cooks)
populateDishSuggestions(); // Create 5 dishes
populateWeeklyMenus(); // Create menu with slots
}
public Map<String, IEntity> getSeededData() {
return seeded;
}
}
The magic: Store everything in Map<String, IEntity> so tests can retrieve by key:
@BeforeEach
void setUp() {
TestCleanDB.truncateTables(emf); // Wipe database
TestPopulator populator = new TestPopulator(emf);
populator.populate();
seeded = populator.getSeededData();
}
@Test
@DisplayName("Update - should update suggestion status to approved")
void updateWithHeadChef() {
// Arrange: Pull seeded data from map
DishSuggestion seed = (DishSuggestion) seeded.get("dish_salmon");
User headChef = (User) seeded.get("user_gordon");
// Act
seed.approve(headChef);
DishSuggestion updated = dishSuggestionDAO.update(seed);
// Assert
assertThat(updated.getDishStatus(), is(Status.APPROVED));
assertThat(updated.getReviewedBy(), is(headChef));
assertThat(updated.getReviewedAt(), is(notNullValue()));
}
Why the IEntity interface matters here: Without it, I’d need separate maps for each type (Map<String, User>, Map<String, Station>, etc.). With IEntity, one map holds everything.
Where testing revealed a silent data loss bug#
Here’s the test that exposed the silent data loss bug:
@Test
@DisplayName("Create - should cascade save all menu slots")
void createMenuWithSlots() {
// Arrange
Station cold = (Station) seeded.get("station_cold");
DishSuggestion salmon = (DishSuggestion) seeded.get("dish_salmon");
DishSuggestion steak = (DishSuggestion) seeded.get("dish_steak");
WeeklyMenu menu = new WeeklyMenu(10, 2025);
menu.addMenuSlot(new WeeklyMenuSlot(MONDAY, salmon, cold));
menu.addMenuSlot(new WeeklyMenuSlot(TUESDAY, steak, cold));
menu.addMenuSlot(new WeeklyMenuSlot(WEDNESDAY, null, cold));
menu.addMenuSlot(new WeeklyMenuSlot(THURSDAY, salmon, cold));
// Act
menuDAO.create(menu);
// Assert
WeeklyMenu fetched = menuDAO.getByIdWithSlots(menu.getId());
assertThat(fetched.getWeeklyMenuSlots(), hasSize(4)); //FAILED: size = 1
}
Expected: 4 slots
Actual: 1 slot
No exception thrown. No error message. Just silent data loss.
I had been warned that equals and hashCode are problematic in Hibernate, and then I realized: the problem happened before Hibernate even saw the data.
Three of the four WeeklyMenuSlot objects never made it into the HashSet. They were silently discarded because:
// My original equals/hashCode (BROKEN)
@Override
public int hashCode() {
return Objects.hashCode(id); // id is null for new objects!
}
// All 4 slots had null IDs
// Objects.hashCode(null) = 0 for all 4
// HashSet thought: "They all hash to 0 and equal each other → duplicates!"
// Only kept 1
The fix that made the test pass:
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof WeeklyMenuSlot)) return false;
WeeklyMenuSlot other = (WeeklyMenuSlot) o;
return id != null && id.equals(other.id);
}
@Override
public int hashCode() {
return getClass().hashCode();
}
Re-ran the test. Green. All 4 slots persisted correctly.
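To convince myself this really was plain Java and not Hibernate, here is a standalone reproduction. BrokenSlot and FixedSlot are simplified stand-ins for WeeklyMenuSlot, illustrative only.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Id-based equals/hashCode: all transient instances (id == null) collide
class BrokenSlot {
    Long id; // null until the database assigns it

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        return o instanceof BrokenSlot && Objects.equals(id, ((BrokenSlot) o).id);
    }
    @Override public int hashCode() { return Objects.hashCode(id); } // 0 for null!
}

// Constant hashCode + null-guarded equals: transient instances stay distinct
class FixedSlot {
    Long id;

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        // Two different transient instances are never equal;
        // persisted instances compare by id
        return o instanceof FixedSlot && id != null && id.equals(((FixedSlot) o).id);
    }
    @Override public int hashCode() { return getClass().hashCode(); }
}
```

Four new BrokenSlot objects collapse to one in a HashSet; four FixedSlot objects survive. That is exactly the 4-slots-became-1 failure from the integration test, with no persistence layer in sight.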
What Worked Well This Week#
Domain-Driven Design in entities: Putting business logic directly in DishSuggestion.approve(User headChef) feels right. The entity enforces its own rules.
Station as entity: Already proved valuable. I can demo the app with different kitchen configurations without touching code.
TestPopulator pattern: Massive time saver. Write the seed logic once, use it in 50+ tests.
Integration testing philosophy: Proved the persistence layer actually works before building on top of it.
equals/hashCode fix: Applied the constant-hashCode pattern to ALL entities upfront after learning the hard way.
DBValidator: Centralized validation prevented duplicating null checks everywhere.
What Was Difficult#
The equals/hashCode trap: A bug that was hard to discover and harder to debug. Silent bugs are the worst.
Decision fatigue: Every design choice had 5 options. Enum vs entity. Abstract DAO vs duplication. Validate in DAO vs service. I spent as much time reading blogs as writing code.
Bidirectional relationships: Even with helper methods, I still had to remember which side is the “owner” of the relationship for cascading to work. It’s easy to mess up.
Knowing when to stop: I could spend weeks perfecting the DAO layer. At some point I had to say “good enough” and move on. I will refactor later if needed, but I can’t let perfect be the enemy of done.
What’s Next#
Next week: Service layer
- Start implementing service layer for business workflows
- Build DTOs to decouple API from entities
- Create an HTTP client for an external API that translates dish names (Danish ↔ English)
- Add exception handling and validation
Questions I’m thinking about:
- Should I make a DatabaseException for wrapping all persistence errors?
- Where do transaction boundaries belong?
- How do I handle translation (Danish ↔ English) in DTOs?
- How do I ensure validation is consistent across layers without duplication?
This is part 3 of my MiseOS development log. Follow along as I build a tool for professional kitchens, one commit at a time.
