Envy ~ Distributed Habbo Retros

Leader

github.com/habbo-hotel
Aug 24, 2012
1,007
267
Envy

Introduction:
In the past, I have worked on numerous projects with the goal of creating a suite of libraries for rapidly developing Habbo applications. During this time, I have gained invaluable knowledge and started to consider the future of retro architecture and how to solve past issues.

Retros were initially developed with the simple goal of reverse-engineering Habbo. Developers didn't consider the implications of what they were building, nor did they think about how these projects would scale over the long term. As a result, hundreds of emulators have been created over the years, with only a few dozen gaining significant community traction. Ultimately, all of these emulators were plagued by the same underlying issue: monolithic architecture. Retros were meant to emulate MMOs, which can support thousands if not millions of concurrent players. While these numbers are mostly unattainable now, the consequences of a monolithic architecture will always become apparent. Our community has been unable to expand emulators much, and we often make decisions based on factors like the revision, CMS, HTML5, app or SWF. If an emulator were built correctly, these things wouldn't matter.

The answer is a distributed architecture based on microservices. Paired with TypeScript, you get a stack that developers can maintain while only needing to know one language. Types can be shared, code can be shared, and you'll have more flexibility to do what you really want with the project.

Project Goals:
  • Provide a distributed architecture that can scale to millions of users
  • Provide common types and libraries for the FE and BE
  • Provide an external GraphQL API for CMS implementations over HTTP2
  • Provide an external WebSocket API for CMS and Plugin implementations to receive real-time updates
  • Provide an internal NATS API for microservice communication
  • Ensure all code follows true separation of concerns
  • Ensure the database can be distributed
Scope Goals:
  • Provide the minimum necessary to start creating your implementation for your preferred revision and connection type (e.g., Nitro, Flashplayer, Shockwave)
  • Accomplish the above without making opinionated decisions on how the end-user connects or interacts
  • Scale Up will provide its own events and API calls that you can wire into the business logic associated with a packet structure or WebSocket events
  • Provide an example using Nitro

Tech Stack:
NodeJS - TypeScript - NestJS - React - Postgres - HTTP2
NATS - Redis - Bull - Vite - Turbopack - NX - PM2

What This Project Is Not:
This project is not a typical emulator. The goal is not to reproduce the exact events or structures 1:1, but instead to create everything from scratch with my own API and events.

This project is not required to run distributed. Internal testing will be done on a monorepo, with PM2 managing the processes and a single database with separate schemas.

However, if you wish to scale, you can do so by putting the independent services on their own server(s) with their respective databases and processing power. AWS is typically best for running distributed environments that scale to demand.

This Project Sounds Over-Engineered:
It is. Everything I do is over-engineered because it's fun and the right solution if you look far enough ahead.

Source Code:
This project will not be released for the foreseeable future. Updates will be shared on this thread. The project will be released once it's stable and ready to iterate on.

 

webbanditten

New Member
Jun 8, 2013
26
10
Hey @Leader,

I'm amazed to see some innovation here on this forum. A step toward distributed architecture could be a real game-changer. A few quick thoughts:

1. Skill Transition: A guideline for developers to smoothly shift their skills from monolithic to microservices would be great. We're all ready to learn and adapt!

2. Testing & Debugging: Could you share any plans for testing and debugging tools? These will be vital in a distributed environment.

3. Inter-service Communication: NATS API for microservice communication sounds interesting. It'd be great to hear how potential network latency and data consistency issues would be managed.

4. Database Management: How does the project plan to handle distributed database management challenges, like transactions and network failures?

5. Infrastructure Cost: Any suggestions for managing costs in cloud environments like AWS? I would love this to be accessible to all!

6. Security: Security is crucial with multiple microservices. Excited to learn more about the integrated security measures.

7. Project Release: I understand your approach to release stable code, but opening up early for community contribution could speed up the process and bring in fresh ideas.
 

Leader

github.com/habbo-hotel
Aug 24, 2012
1,007
267
1. Skill Transition: There shouldn't be too much difficulty, assuming the developers are competent and can follow documentation or understand libraries. Every service I create will expose an internal API with a client lib for other services to use when connecting to it. This keeps their interactions type-protected and type-documented, so it's easy for developers to know what they can interact with.
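To make the idea concrete, here's a minimal sketch of a service exposing a typed client lib. The service names, subjects, and payload shapes are illustrative, not the project's real contract, and the in-memory transport stands in for NATS so the sketch is self-contained:

```typescript
// Shared contract types, as they might live in a common package.
interface GetUserRequest { userId: string }
interface GetUserResponse { userId: string; username: string }

// Transport stub so the sketch runs without a NATS server; in the real
// stack this function would perform a NATS request under the hood.
type Transport = (subject: string, payload: string) => Promise<string>;

// The client lib other services import; consumers never touch raw subjects
// or untyped payloads, so usage stays type-checked against the contract.
class UserServiceClient {
  constructor(private readonly transport: Transport) {}

  async getUser(req: GetUserRequest): Promise<GetUserResponse> {
    const raw = await this.transport("user.get", JSON.stringify(req));
    return JSON.parse(raw) as GetUserResponse;
  }
}

// Demo with an in-memory transport standing in for the message bus.
const demoTransport: Transport = async (_subject, payload) => {
  const req = JSON.parse(payload) as GetUserRequest;
  return JSON.stringify({ userId: req.userId, username: "demo" });
};

new UserServiceClient(demoTransport)
  .getUser({ userId: "42" })
  .then((res) => console.log(res.username)); // prints "demo"
```

The point is that a consuming service only ever sees `UserServiceClient` and the shared types, never the wire format.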

2. Testing & Debugging: Tools may be developed as I build this out. My primary goal for local development is using pm2 to run all apps/services locally and restart them as needed. Long term, I hope to have hot-reload capability on the backend services. The frontend is already hot-reload capable, including library integrations.
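For reference, local process management with pm2 can be as simple as an ecosystem file. The service names and paths below are hypothetical, just to show the shape:

```javascript
// ecosystem.config.js - hypothetical service names and build paths.
// `pm2 start ecosystem.config.js` launches all three processes, and the
// `watch` globs restart a service when its build output changes.
module.exports = {
  apps: [
    { name: "gateway",      script: "dist/apps/gateway/main.js", watch: ["dist/apps/gateway"] },
    { name: "user-service", script: "dist/apps/user/main.js",    watch: ["dist/apps/user"] },
    { name: "room-service", script: "dist/apps/room/main.js",    watch: ["dist/apps/room"] },
  ],
};
```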

3. Inter-service Communication: Assuming the infrastructure is correct in production and all services are on a private network, with exposure only where necessary and behind a proxy or gateway, latency should be fairly low, and you can additionally set up services to be geolocated. NATS has queue groups, allowing events to be processed once, by a single subscriber. I may also add Kafka for the actual event streams; NATS is primarily for request->response on the internal communication. Locally, I haven't had any issues with latency, and it performs similarly to a monolith even with 3+ service calls chained.
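The queue-group behaviour mentioned above can be illustrated with a self-contained stand-in (this is not the nats.js API, just the semantics): subscribers that share a queue name split the messages between them, so each message is handled by exactly one member of the group.

```typescript
// In-memory stand-in for NATS queue groups: members of the same queue on a
// subject receive messages round-robin, so each message is handled once.
type Subscriber = { queue: string; handler: (msg: string) => void };

class QueueGroupBus {
  private subs = new Map<string, Subscriber[]>();
  private rr = new Map<string, number>();

  subscribe(subject: string, queue: string, handler: (msg: string) => void) {
    const list = this.subs.get(subject) ?? [];
    list.push({ queue, handler });
    this.subs.set(subject, list);
  }

  publish(subject: string, msg: string) {
    // Group subscribers by queue; deliver to exactly one member per group.
    const groups = new Map<string, Subscriber[]>();
    for (const s of this.subs.get(subject) ?? []) {
      groups.set(s.queue, [...(groups.get(s.queue) ?? []), s]);
    }
    for (const [queue, members] of groups) {
      const key = `${subject}:${queue}`;
      const i = this.rr.get(key) ?? 0;
      members[i % members.length].handler(msg);
      this.rr.set(key, i + 1);
    }
  }
}

// Two workers in the same group: each published event reaches only one.
const bus = new QueueGroupBus();
const handled: string[] = [];
bus.subscribe("room.events", "room-workers", (m) => handled.push(`worker-a:${m}`));
bus.subscribe("room.events", "room-workers", (m) => handled.push(`worker-b:${m}`));
bus.publish("room.events", "e1");
bus.publish("room.events", "e2");
console.log(handled); // two events, each handled exactly once
```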

4. Database Management: Every service is directly responsible for its own data and will use ids when interacting with other services. With this approach, services are simpler and easier to manage. Their databases don't reference external properties aside from a name and key. Feature services will be used for situations where multiple services are chained together and will be responsible for managing the data and ensuring success at each step. Databases are set up to be completely independent of one another and have no connections to each other. I use Postgres as well, which provides ACID transactions within each service's own database.

5. Infrastructure Cost: Hotels can run this on a single server or hundreds of servers. Hotels would scale their cost to their demand, but overall it should use fewer resources than a typical setup by letting you fine-tune where demand comes from.

6. Security: Services have multiple layers of APIs. Internal APIs should never be exposed on a port and are expected to run behind an internal network only. There is no inherent security outside the internal network, although I may add authentication keys for internal APIs in the future. External APIs have full authentication and authorization protections and can be scoped to the feature level. Currently, I use JWT bearer tokens for external APIs, with additional checks by the session service.
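As a rough stand-in for the bearer-token flow (a real deployment would use a proper JWT library; this only illustrates the signed-token idea with Node's built-in crypto, and the secret is a hypothetical dev value):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative only: HMAC-signed tokens showing why tampering fails
// verification. Real JWTs add headers, expiry, and standard claims.
const SECRET = "dev-only-secret"; // hypothetical; never hardcode in production

function sign(payload: string): string {
  const sig = createHmac("sha256", SECRET).update(payload).digest("base64url");
  return `${Buffer.from(payload).toString("base64url")}.${sig}`;
}

function verify(token: string): string | null {
  const [body, sig] = token.split(".");
  if (!body || !sig) return null;
  const payload = Buffer.from(body, "base64url").toString();
  const expected = createHmac("sha256", SECRET).update(payload).digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Length check plus constant-time compare rejects forged signatures.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return payload;
}

const token = sign(JSON.stringify({ sub: "u9", scope: "rooms:read" }));
console.log(verify(token) !== null);       // true: valid signature
console.log(verify(token + "x") !== null); // false: tampered token fails
```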

7. Project Release: If people were able to contribute to this project, they would have made their own version by now. The earliest the source would be shared is when I believe it's capable of reproducing the core retro features.

I haven't worked on this much in the past few weeks, but the core of the project has been built out, including everything I described above outside the retro implementation.
 

Leader

github.com/habbo-hotel
Aug 24, 2012
1,007
267
I got busy with other projects but recently started working on this again. It's going pretty well, and I have a reliable architecture and foundation to safely build features on top of.

I'm working on it with the intent to use it on my own roleplay, but I'm not sure how tangible that will be in the near future, as I'm just now starting on the handshake ;)

Commits are being tracked on the server now, and I plan on sharing weekly updates.

Event handling is fully handled internally, and packets are communicated back internally via NATS to the websocket gateway, where they're sent to the appropriate users.

I also took some time to break the packets out into a more maintainable library.
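The gateway fan-out described above can be sketched like this. Everything here is illustrative (the real packet shapes and subjects may differ), and the gateway stub stands in for a NATS subscription plus live websocket connections:

```typescript
// Services publish outgoing packets on an internal subject with target user
// ids; the websocket gateway delivers them to matching live connections.
interface OutgoingPacket {
  targetUserIds: string[];
  event: string;
  payload: unknown;
}

class GatewayStub {
  private sockets = new Map<string, (frame: string) => void>();

  // A user's websocket connection registers a send callback.
  connect(userId: string, send: (frame: string) => void) {
    this.sockets.set(userId, send);
  }

  // In the real stack this would be the NATS subscription handler.
  onInternalPacket(pkt: OutgoingPacket) {
    const frame = JSON.stringify({ event: pkt.event, payload: pkt.payload });
    for (const id of pkt.targetUserIds) {
      this.sockets.get(id)?.(frame); // only connected targets receive it
    }
  }
}

const gateway = new GatewayStub();
const received: string[] = [];
gateway.connect("u1", (f) => received.push(f));
gateway.onInternalPacket({
  targetUserIds: ["u1", "u2"],
  event: "chat",
  payload: { text: "hi" },
});
console.log(received.length); // 1 - u2 is not connected, so only u1 got the frame
```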
 

LeChris

github.com/habbo-hotel
Sep 30, 2013
2,744
1,326
The source code is now public

 
