MEMO: Salvage Solutions Inc. Technical Training Material
The following document contains approved networking protocols for multi-Associate operations. Please note that while simultaneous consciousness across multiple vessels creates certain synchronization challenges, these are considered normal operational parameters. Any perceived "lag" between Associates is a feature of quantum salvage mechanics, not a defect. Salvage Solutions Inc. reminds you that desynchronized Associates have a 73% higher career conclusion rate, which is why we invested in this infrastructure. You're welcome.
When I started building Deep Haul, I made a decision that shaped every system in the game: multiplayer wouldn't be bolted on later - it would be the foundation.
No "we'll add co-op in a future update." No "single-player first, then we'll figure out networking." From day one, every feature was designed with replication in mind. This meant more upfront complexity, but it also meant I'd never face the nightmare of retrofitting netcode into systems that assumed a single player.
Here's what I learned building co-op extraction horror in Unreal Engine 5, and the patterns that kept 2-4 players synchronized while running for their lives.
The Core Problem
Co-op horror has a unique networking challenge: precision matters. In a competitive shooter, a few frames of disagreement between clients gets resolved through lag compensation and hit registration. But in co-op horror, if Player 1 sees the door close and Player 2 sees it open, the entire experience breaks.
When you're carrying a dead teammate's body through a decompressing airlock, everyone needs to see the same physics. When a vacuum zone activates, all four players need to see corpses and loose objects get yeeted in the same direction at the same time. When the mission timer hits zero, everyone needs to experience that dread simultaneously.
This isn't about competitive fairness - it's about shared horror.
Pattern 1: Deferred Physics Impulse Replication
The first major pattern I developed solved a deceptively tricky problem: syncing one-shot physics impulses when the physics bodies aren't ready yet.
The use case: vacuum zones. When you drop a dead body into a decompressing corridor, it should launch dramatically toward the breach. All clients need to see the same trajectory, but physics initialization has timing quirks.
The Failed Approaches
I tried the obvious things first:
Multicast RPC: Called immediately when the body entered the volume. Result: clients received the impulse before their physics bodies were ready, so nothing happened.
Apply immediately after physics setup: Same problem. Calling RecreatePhysicsState() and SetSimulatePhysics(true) doesn't mean physics is ready that frame.
Replicated property with immediate clear: Set the impulse value, apply it locally, then clear the property. Replication batching meant clients often received a zero value instead of the impulse.
The Solution: Capture, Don't Clear
The working pattern uses a replicated property with OnRep, but with a critical twist: the server never clears the value. Let me walk through it.
```cpp
// Header declaration
UPROPERTY(ReplicatedUsing = OnRep_PendingDropImpulse)
FVector PendingDropImpulse;

UFUNCTION()
void OnRep_PendingDropImpulse();
```
When the vacuum zone detects a corpse, it sets the impulse:
```cpp
Character->SetPendingDropImpulse(CalculatedImpulse);
```
On the server side, we capture the value and defer application, but don't clear the property:
```cpp
if (HasAuthority() && !PendingDropImpulse.IsNearlyZero())
{
    // Capture NOW - don't rely on member variable in timer
    FVector CapturedImpulse = PendingDropImpulse;

    GetWorld()->GetTimerManager().SetTimerForNextTick([this, CapturedImpulse]()
    {
        if (!IsValid(this)) return;

        if (USkeletalMeshComponent* MeshComp = GetMesh())
        {
            if (MeshComp->IsSimulatingPhysics())
            {
                MeshComp->AddImpulse(CapturedImpulse, NAME_None, true);
            }
        }

        // DO NOT clear PendingDropImpulse here!
        // It needs to replicate to clients first
    });
}
```
On the client side, OnRep captures the value immediately and clears it locally:
```cpp
void ADeepHaulCharacter::OnRep_PendingDropImpulse()
{
    if (PendingDropImpulse.IsNearlyZero()) return;

    // Capture the impulse NOW - server might clear it later
    FVector CapturedImpulse = PendingDropImpulse;

    // Clear locally so we don't apply again if OnRep fires multiple times
    PendingDropImpulse = FVector::ZeroVector;

    // Defer application until physics is ready
    GetWorld()->GetTimerManager().SetTimerForNextTick([this, CapturedImpulse]()
    {
        if (!IsValid(this)) return;

        if (USkeletalMeshComponent* MeshComp = GetMesh())
        {
            // Defensive - ensure physics is enabled
            if (!MeshComp->IsSimulatingPhysics())
            {
                MeshComp->SetCollisionProfileName(TEXT("Ragdoll"));
                MeshComp->RecreatePhysicsState();
                MeshComp->SetSimulatePhysics(true);
            }

            MeshComp->AddImpulse(CapturedImpulse, NAME_None, true);
        }
    });
}
```
Key Principles
- Server sets the value once and leaves it - let replication happen naturally
- Clients clear locally after capturing - prevents race conditions
- Always capture values in lambda closures - don't rely on member variables that might change
- Use SetTimerForNextTick for physics - physics operations need a frame to take effect
- Use REPNOTIFY_OnChanged - only fire OnRep when the value actually changes
This pattern ensures that by the time any client tries to apply the impulse, their physics bodies are ready. The result: synchronized ragdoll launches across all clients, every time.
Pattern 2: REPNOTIFY_Always for State Transitions
The second pattern solved body carrying. When you pick up a teammate's corpse, you need to:
- Attach it to your character
- Disable its physics
- Sync this state to all clients
- Handle both pickup AND drop
The trick is using REPNOTIFY_Always instead of the default REPNOTIFY_OnChanged:
```cpp
UPROPERTY(ReplicatedUsing = OnRep_BodyCarrier, BlueprintReadOnly)
TObjectPtr<ADeepHaulCharacter> BodyCarrier;

// In GetLifetimeReplicatedProps:
DOREPLIFETIME_CONDITION_NOTIFY(
    ADeepHaulCharacter,
    BodyCarrier,
    COND_None,
    REPNOTIFY_Always // Key difference
);
```
Why REPNOTIFY_Always? With the default REPNOTIFY_OnChanged, the OnRep only fires when the received value differs from the client's current local value. During carrier transitions (Player 1 drops, Player 2 picks up), a drop and re-pickup can collapse into a single replication update, or the client may already hold the incoming value locally - in either case OnChanged silently skips the notification and the attach/detach logic never runs. REPNOTIFY_Always guarantees the OnRep fires on every replicated update of the property.
The OnRep handler manages both states:
```cpp
void ADeepHaulCharacter::OnRep_BodyCarrier()
{
    if (BodyCarrier)
    {
        // Being carried - disable physics and attach
        if (USkeletalMeshComponent* MeshComp = GetMesh())
        {
            MeshComp->SetSimulatePhysics(false);
            MeshComp->AttachToComponent(
                GetCapsuleComponent(),
                FAttachmentTransformRules::SnapToTargetNotIncludingScale
            );
            // Set carrying position offset
        }
    }
    else
    {
        // Dropped - re-enable ragdoll
        if (USkeletalMeshComponent* MeshComp = GetMesh())
        {
            MeshComp->SetCollisionProfileName(TEXT("Ragdoll"));
            MeshComp->RecreatePhysicsState();
            MeshComp->SetSimulatePhysics(true);
            MeshComp->WakeAllRigidBodies();
        }
    }
}
```
Clean, symmetric, and it handles the full state machine on both server and clients.
Pattern 3: Client Readiness System
The third pattern solved a more subtle problem: players spawning into missions before they're ready.
In early testing, clients would sometimes spawn mid-mission while still loading assets. They'd miss the docking sequence, spawn out of sync, and occasionally see physics objects in the wrong state. The fix was a readiness system inspired by lobby mechanics, but integrated into the gameplay flow.
The flow:
- Player connects and possesses their pawn
- Client calls AcknowledgePossession() - a built-in callback that fires when possession is complete on the client side
- Client sends Server_ReportClientReady() RPC to the game mode
- Game mode tracks the ready count in replicated game state
- HUD shows "Waiting for crew... X/Y ready"
- When all players are ready, the server begins the docking sequence
Why AcknowledgePossession instead of just checking player count? Because a player can be "connected" but not yet fully loaded: the pawn might not be possessed yet, and client-side setup might not be complete. The explicit ready report ensures the player is truly ready to play.
The game state exposes this to the HUD:
```cpp
// Game State
UPROPERTY(ReplicatedUsing = OnRep_ReadyState)
int32 ReadyPlayerCount;

UPROPERTY(ReplicatedUsing = OnRep_ReadyState)
int32 ExpectedPlayerCount;

UPROPERTY(BlueprintAssignable)
FOnReadyStateChanged OnReadyStateChanged;
```
The HUD binds to the delegate and shows a loading panel until everyone's ready. Simple, but it eliminated all the "I spawned into chaos" bugs.
Pattern 4: Two-Tier Ability Cooldowns in GAS
Deep Haul uses Unreal's Gameplay Ability System for all player actions, including item use. The cooldown pattern needed to handle two requirements:
- Global cooldown - A short (0.5s) cooldown after using ANY item, preventing spam
- Per-item cooldown - Longer individual cooldowns for specific items (scanner is 5s, crowbar is 1s)
Both use the standard GAS pattern: Gameplay Effects that grant tags to the player's Ability System Component. While the tag is present, abilities with that tag in their ActivationBlockedTags can't activate.
The base item ability class registers the global cooldown tag:
```cpp
UDeepHaulItemAbility::UDeepHaulItemAbility()
{
    ActivationBlockedTags.AddTag(DeepHaulTags::Cooldown_ItemUse());
    CooldownTagContainer.AddTag(DeepHaulTags::Cooldown_ItemUse());
}
```
Individual items add their own tag:
```cpp
UGA_UseItem_Scanner::UGA_UseItem_Scanner()
{
    ItemCooldownDuration = 5.0f; // 5 second scanner cooldown
    ActivationBlockedTags.AddTag(DeepHaulTags::Cooldown_Item_Scanner());
    CooldownTagContainer.AddTag(DeepHaulTags::Cooldown_Item_Scanner());
}
```
When the ability commits, it applies both cooldown effects:
```cpp
void UDeepHaulItemAbility::ApplyCooldown(...) const
{
    // Global cooldown (0.5s)
    if (CooldownDuration > 0.0f)
    {
        FGameplayEffectSpecHandle GlobalSpec = MakeOutgoingGameplayEffectSpec(...);
        GlobalSpec.Data->SetSetByCallerMagnitude(
            Data_Cooldown(),
            CooldownDuration
        );
        ApplyGameplayEffectSpecToOwner(..., GlobalSpec);
    }

    // Per-item cooldown (5s for scanner)
    if (ItemCooldownDuration > 0.0f && ItemCooldownEffectClass)
    {
        FGameplayEffectSpecHandle ItemSpec = MakeOutgoingGameplayEffectSpec(...);
        ItemSpec.Data->SetSetByCallerMagnitude(
            Data_Cooldown(),
            ItemCooldownDuration
        );
        ApplyGameplayEffectSpecToOwner(..., ItemSpec);
    }
}
```
The timeline after using the scanner:
- 0.0-0.5s: Global cooldown active, ALL items blocked
- 0.5-5.0s: Only scanner cooldown active, other items available
- 5.0s+: Everything available
This is the standard GAS cooldown pattern, but the two-tier system gives us both spam prevention and item-specific pacing. And because it's all tag-based, it replicates automatically through GAS's built-in replication.
The Philosophy
Here's what building multiplayer-first taught me: every system that touches gameplay state needs to answer "who has authority?" before anything else.
Not "how does this work?" - that comes second. First question: who decides? Server? Client? Predicted on client, confirmed by server?
For Deep Haul, the answers are:
- Physics impulses: Server authoritative, replicated via captured properties
- Item pickup/drop: Server authoritative, clients predict visuals
- Ability activation: Server authoritative via GAS, clients predict cosmetics
- Mission state: Server authoritative via game mode, replicated through game state
- Player readiness: Client reports, server validates
This isn't about being dogmatic - it's about making decisions early so you don't paint yourself into corners later.
What's Next
These patterns handle the moment-to-moment synchronization, but Deep Haul has more complex networking challenges ahead:
- AI enemies that feel responsive on clients while remaining server-authoritative
- Environmental hazards that trigger on server but feel immediate to clients
- Extraction sequences where timing and positioning are critical
I'll cover those in future posts. For now, these four patterns have kept 2-4 players synchronized through hundreds of test missions, and they're all built on Unreal's standard replication system - no custom solutions, no magic.
Just authority, prediction, and careful thought about what needs to sync when.
S.A.L.L.I. NOTE: Salvage Solutions Inc. reminds Associates that synchronized team operations increase efficiency by 34%, which is why we invested in this networking infrastructure. The fact that desynchronized Associates have a 73% higher career conclusion rate is unrelated. We simply want you to succeed together.