API v2 Migration Guide
Introducing OfficeRnD API v2
OfficeRnD API v2 is the next generation of our platform's API. It is a complete redesign and upgrade of the OfficeRnD API that significantly improves performance, scalability, and security. In API v2, we've rebuilt the foundation using modern technology and best practices to ensure the API can serve your applications faster and more reliably.
This new version is a breaking change from API v1, but it lays a future-proof groundwork so that moving forward, both your integration and our platform can grow with stability. The major breaking changes include:
- Introduced structured, cursor-based pagination
- Independent read/write rate-limit tracking
- Added new rate limits and throttling rules
- Added new filtering capabilities
- Adopted OAuth 2.0 and granular security scopes
- Deprecated the `$populate` query parameter
Why a new API version?
Simply put, API v1 started to show its limits as our customer base and data grew. We developed API v2 to address those challenges head-on. The new version is engineered for high performance, quicker response times, handling larger workloads, and robust scalability to support many more organizations and requests in parallel without hiccups.
Equally important, API v2 introduces enhanced security measures to protect your data. Adopting modern standards (like OAuth 2.0) and more structured request handling has made the platform more secure against unauthorized access and misuse.
In short, API v2 aims to provide a faster, more scalable, and more secure integration experience that will serve you well in the long run.
How is API v2 implemented?
Under the hood, API v2 is powered by NestJS, a cutting-edge Node.js framework known for building efficient and scalable server-side applications. This modern framework provides a solid architectural backbone (inspired by Angular's modular design) that makes our new API highly maintainable and extensible. NestJS's architecture allows our engineering team to deliver a system that is modular, testable, and optimized for speed and reliability.
We leverage DTOs (Data Transfer Objects) throughout the API, meaning every request and response adheres to a well-defined schema. This ensures consistent data structures and built-in validation of inputs, reducing errors and eliminating the ambiguity that may have existed in API v1.
The result for you as a developer is an API that behaves more predictably – you know exactly what structure to send and expect in return, which makes integration easier and less error-prone.
Key improvements in API v2
- Structured Pagination: All list endpoints now use a uniform, cursor-based pagination scheme. You'll use a `$limit` parameter to control page size and receive `$cursorNext` tokens for fetching subsequent pages. This standardized approach makes it easy to navigate large data sets and ensures you only fetch what you need (improving performance for both client and server). No more ad-hoc pagination; API v2's paging is consistent across resources and includes helpful metadata in the response about result ranges and navigation.
- Entity-Scoped Writes: We've designed API v2 so that every write operation (creating, updating, or deleting data) is explicitly scoped to a specific entity (such as an organization or workspace). In practice, this means API endpoints for modifications require an identifier in the path (for example, a specific organization ID or slug) to ensure you're operating in the correct context. This scoping adds an extra layer of safety in multi-tenant scenarios, preventing accidental changes outside your intended scope, and makes integrations clearer and safer. You always know exactly which entity you're affecting with a given call, aligning with the principle of least privilege in data management.
- OAuth 2.0 with Granular Scopes: API v2 adopts the OAuth 2.0 standard for authentication, replacing the old token mechanism with a more secure and flexible system. When your app requests an access token, it can specify scopes – fine-grained permissions that limit what the token can do. For example, you might request a token that can only read member data but not modify it. The authorization server will issue a token with exactly the scopes you ask for (and nothing more), and the API will enforce those scopes on every call.
This means improved security for your integration: even if a token is compromised, its powers are restricted. OAuth 2.0 with scopes is an industry best practice for protecting APIs, and moving to this model in API v2 ensures only the right parts of your data can be accessed or changed by a given client.
General (High‑Level) Changes
Rate Limits and Throttling
- Specific Rate Limits: In API v2, each user (or integration) within an organization is subject to strict rate limits. The default limits are 400 read requests per minute (up to 20,000 per day) and 100 write requests per minute (up to 5,000 per day) per user/integration per organization. These limits ensure no single client overwhelms the system while still allowing substantial throughput for typical use cases. If a client exceeds these thresholds, the API will respond with HTTP 429 Too Many Requests errors, indicating the need to slow down or pause requests.
- Independent Read/Write Tracking: Rate limiting is tracked separately for read and write operations. In other words, consuming your full allotment of read requests does not reduce the number of writes you can perform, and vice versa. This separation means heavy read usage won't throttle critical write actions. For example, a spike in `GET` requests will not impact the allowed number of `POST`/`PUT`/`DELETE` calls for that same user in the given time window.
- Granular Limit Tracking – Per User/Integration, Per Organization: The above limits apply on a per-user (or per-integration) per-organization basis. “Per organization” means that if a user belongs to multiple organizations, the rate counts are isolated to each org’s context. Similarly, an integration (a third-party app or API token) is treated as its own “user” with its own counter for each organization. This granularity ensures that one organization’s traffic spike won’t affect another’s, and prevents a single user account from consuming another user’s quota. Each combination of {user/integration + organization} has its own independent throttle counters.
- Exceptions for Special Use Cases: We recognize that certain partners or power users might have legitimate use cases that require higher throughput. Exceptions to the rate limits can be granted case-by-case when justified by specific use cases. For example, a technology partner performing large data synchronizations may request a higher cap. To request an exception, you typically need to contact our support or engineering team with details about your use case and justification. Each request is reviewed, and if approved, a custom rate limit may be applied for your organization or integration. (In practice, sustained requests for higher limits will undergo careful evaluation, and you should seek approval before exceeding the standard limits to avoid disruption.)
- Best Practices Under Throttling: Clients should implement exponential backoff or graceful error handling for 429 responses. The API provides rate limit info in response headers (e.g., remaining requests and reset time) to help you monitor usage. Design your integration to spread out bursts and stay within the per-minute limits. If you anticipate a need to approach the daily limit (20k reads or 5k writes), consider batching data or requesting a rate limit exception in advance. By adhering to these limits and best practices, you ensure reliable service for your integration and others sharing the platform.
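The backoff guidance above can be sketched as a small retry wrapper. This is an illustrative helper, not part of any OfficeRnD SDK; `send_request` stands in for whatever HTTP call your client makes:

```python
import time

def request_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Call send_request() and retry on HTTP 429 with exponential backoff.

    send_request is any zero-argument callable returning an object with a
    .status_code attribute (e.g. a wrapped HTTP client call).
    """
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Exponential backoff: wait 1s, 2s, 4s, ... before retrying.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate limited: retries exhausted")
```

A production version would also honor the rate-limit reset headers mentioned above instead of relying on a fixed schedule.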
Pagination in API v2
In API v2, we introduced cursor-based pagination. This new approach provides more consistent performance and results across all listing endpoints. Each page is identified by a cursor token. Cursor-based pagination improves performance (especially on large datasets) and the stability of results, and it offers a unified paging experience across endpoints.
Why Cursor-Based Pagination?
- Performance: Cursor-based pagination uses an indexed position (cursor) to directly fetch the next set of items, avoiding large skips. This means lower latency and less load on the database when listing resources.
- Consistency: Cursors ensure stable ordering – even if new items arrive, your next page continues exactly after the last item you saw, avoiding overlaps or gaps.
- Consistency Across Endpoints: All list endpoints in API v2 now use the same cursor mechanism. This uniformity simplifies integration: once you implement cursor pagination for one endpoint, the pattern is the same everywhere (no more mixing page numbers on one endpoint and cursors on another).
- Developer Experience: By loading a maximum of 50 items per request (the default in v2), responses are kept fast and manageable. Developers have predictable data loads and fine control over page size via the `$limit` parameter, without worrying about inadvertently requesting huge offsets or datasets in one call.
Using the New Pagination Parameters
All v2 collection endpoints now accept cursor-based query parameters:
- `$limit` - Optional. The number of items to return in the response. The default (and maximum) is 50 items per request. If you do not specify `$limit`, the API will return 50 items by default. (Any value above 50 will be capped at 50.)
- `$cursorNext` - Optional. A cursor token indicating the page of results after a given item. Use this to retrieve the next page. For the first page of results, omit this parameter (or set it to null) to start from the beginning.
- `$cursorPrev` - Optional. A cursor token for the page of results before a given item. This is used to go one page backward (to the previous page), if needed.

The `$cursorNext` and `$cursorPrev` tokens are opaque identifiers generated by the server. Your code should treat them as read-only tokens: don’t try to decode or manipulate them.
When you make a request without any cursor, you’ll get the first page of data along with a `cursorNext` token in the response (if more data is available). Conversely, if you’re at the end of a list, the `cursorNext` in the response will be `null`. The same goes for `cursorPrev` at the beginning of a list.
Always use the tokens provided by the latest response to navigate – they may expire or become invalid if reused later, so fetch fresh cursors as you go.
Response Format and Pagination Metadata
The response body for list endpoints includes pagination metadata alongside the returned items. The exact format may vary by endpoint, but generally, you will see something like this structure:
{
"results": [ … list of 50 transaction objects … ],
"cursorNext": "eyJpZCI6IjUwIiwgIm9mZnNldCI6IjUwIn0=",
"cursorPrev": null,
"rangeEnd": 50,
"rangeStart": 1
}
In the example above, the `results` array contains up to 50 results. The `cursorNext` field holds a token you can use in the next request to get the following page of results. `cursorPrev` is `null` here, indicating this response was the first page (no previous page exists). On later pages, `cursorPrev` would contain a token to go backwards, and `cursorNext` would have a token (or be `null` if it was the last page).
Some endpoints may wrap this data in a top-level object or a pagination field – refer to the specific endpoint documentation in the technical spec for exact response shapes. Regardless of format, the presence of `cursorNext` and `cursorPrev` in the response gives you all the information needed to implement a “Next” or “Previous” button in your integration.
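A response body like the example above maps directly onto paging state for a UI. A minimal sketch, assuming the field names shown (which, as noted, may vary by endpoint):

```python
def page_controls(body):
    """Derive "Next"/"Previous" button state from a v2 list response body."""
    return {
        "has_next": body.get("cursorNext") is not None,
        "has_prev": body.get("cursorPrev") is not None,
        # rangeStart/rangeEnd describe which items of the list this page covers.
        "shown": (body.get("rangeStart"), body.get("rangeEnd")),
    }
```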
Example: Paginating Through Results
Suppose you want to retrieve all invoices using the new pagination. In the old v1 API, you might have paged through results with a request such as:
GET /api/v1/organizations/{orgSlug}/payments
In v2, you will do the following:
- Fetch the first page: Make a request without any cursor (and optionally specify a limit, up to 50). For example:
> GET /api/v2/invoices?$limit=50
The API will return the first 50 invoices in an array, plus a `cursorNext` token in the JSON response (and `cursorPrev` will be `null`).
- Fetch the next page: Take the `cursorNext` value from the first response and use it in the next request. For example, if the first response returned `"cursorNext": "abc123..."`, your next call would be:
> GET /api/v2/invoices?$limit=50&$cursorNext=abc123...
This retrieves the next 50 invoices. The response will include a new `cursorNext` (for page 3) and also a `cursorPrev` (which can be used to go back to page 1 if needed).
- Repeat as needed: Continue this process until you reach a response where `cursorNext` is `null`, meaning you’ve reached the end of the list. Similarly, you can use `cursorPrev` to page backward if your application needs a “Previous” page button.
Each page request is independent: you don’t need to maintain any offset count yourself – just use the provided cursors. By adopting this approach, your integration benefits from efficient, predictable paging. The new system ensures you won’t accidentally skip or duplicate records due to intervening data changes, and it keeps the data retrieval snappy by never returning more than 50 items at once.
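The paging steps above reduce to a single loop. In this sketch, `fetch_page` is a stand-in for your HTTP call (for example, GET /api/v2/invoices with `$limit` and `$cursorNext`); the `results`/`cursorNext` field names follow the response format shown earlier:

```python
def iterate_all(fetch_page, limit=50):
    """Yield every item from a cursor-paginated v2 list endpoint.

    fetch_page(limit, cursor) is expected to perform the HTTP request and
    return the decoded JSON body as a dict. Pass cursor=None for page one.
    """
    cursor = None
    while True:
        page = fetch_page(limit, cursor)
        yield from page["results"]
        cursor = page.get("cursorNext")
        if cursor is None:  # null cursorNext marks the last page
            break
```

Because each iteration uses the cursor from the latest response, the loop naturally follows the "always use fresh cursors" rule.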
Filtering Capabilities
API v2 introduces robust filtering features with a standardized query structure. In API v2, each endpoint defines a Filter DTO (Data Transfer Object) that outlines which fields can be filtered and how. These DTOs enforce consistency in queries, ensuring clients use only allowed fields (generally those indexed in the database for efficiency). By restricting filters to indexed fields, the API avoids expensive full table scans and significantly improves query performance.
Supported filter operators: API v2 supports a concise set of operators for building queries, closely mirroring common database query syntax. You can use equality (implicitly, by specifying `field=value`), and the following operators within a filter JSON or query parameter structure:
- `$gt` - greater than (`>`)
- `$gte` - greater than or equal (`>=`)
- `$lt` - less than (`<`)
- `$lte` - less than or equal (`<=`)
- `$in` - value is in an array of allowed values
These operators let you construct rich queries (e.g., numeric ranges and multi-value matches) while keeping the format standardized. Multiple filters are combined with a logical `AND` by default, meaning a returned record must meet all conditions (this is the typical case for query parameters in REST).
For example, to retrieve members created on or after January 1, 2025, that have an "active" status, you could do:
GET /api/v2/organizations/{orgSlug}/members?createdAt[$gte]=2025-01-01T00:00:00.000Z&status=active
You can also mix operators on the same field to create ranges. For example, to fetch members created between the first of January 2025 and the first of February 2025 (inclusive of the first day of January, exclusive of the first day of February):
GET /api/v2/organizations/{orgSlug}/members?createdAt[$gte]=2025-01-01T00:00:00.000Z&createdAt[$lt]=2025-02-01T00:00:00.000Z
To filter on multiple values for a field, use the `$in` operator. For example, if a `member` resource has a status field, you can find members that are either "active" or "contact" with:
GET /api/v2/organizations/{orgSlug}/members?status[$in]=active,contact
(Depending on the API, multiple `$in` values might be provided as a comma-separated list or an array in JSON. Check the v2 docs for the exact syntax.) This query returns members whose status is either "active" or "contact". Under the hood, the API will interpret the list and match any of the values.
Filtering limitations and rules: API v2 imposes some important limitations to keep queries performant and unambiguous:
- Maximum of 50 values for `$in`: When using the `$in` operator, you can provide up to 50 values in the list. Supplying more than 50 values will result in an error (or will be ignored) in v2. This limitation is in place because extremely large `$in` lists can degrade database performance or even hit query parser limits. By capping it at 50, the API ensures the query remains efficient and avoids undue load. (In practice, needing more than 50 exact matches might indicate a need for a different querying strategy or a dedicated endpoint for that use case.)
- Mutually exclusive operators on the same field: You cannot use certain operator combinations together on one field. In particular, `$gt` and `$gte` cannot be applied to the same field in a single filter (since it wouldn’t make sense to have both “greater than” and “greater than or equal” on the same field simultaneously). Similarly, you can’t combine `$lt` and `$lte` on the same field. If both were provided, one would contradict or override the other, so v2 will reject such a query.
The rule is to use one type of comparison per field: one of `$gt`/`$gte` (for the lower bound if needed) and one of `$lt`/`$lte` (for the upper bound if needed), along with `$in` if it applies. For example, instead of trying to do `createdAt[$gt]=2025-01-01T00:00:00.000Z&createdAt[$gte]=2025-01-01T00:00:00.000Z` (which is invalid), decide on the proper range – e.g., `createdAt[$gte]=2025-01-01T00:00:00.000Z&createdAt[$lt]=2025-02-01T00:00:00.000Z`. These constraints make the API’s behavior predictable and the filtering logic simpler to optimize.
- No conflicting conditions: Building on the above, v2’s filter DTOs generally ensure you don’t provide conflicting conditions. For example, using `$in` on a field usually would exclude using `$ne` on the same field in the same query (since `$in` already specifies a set of acceptable values). The API will typically interpret multiple conditions on the same field as an AND combination, so if they conflict (no value can satisfy both), the result will just be empty, but it’s better to avoid such queries altogether. The DTO schema helps here by clearly defining what’s allowed.
Examples of valid filters:
- Get all members created on or after January 1, 2024:
GET /api/v2/organizations/{orgSlug}/members?createdAt[$gte]=2024-01-01T00:00:00.000Z
- Get members with status "active" or "contact" that were created in 2025:
GET /api/v2/organizations/{orgSlug}/members?status[$in]=active,contact&createdAt[$gte]=2025-01-01T00:00:00.000Z&createdAt[$lt]=2026-01-01T00:00:00.000Z
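To illustrate the `field[$op]=value` convention, here is a hypothetical client-side helper (not part of any SDK) that serializes a filter dict into a query string while enforcing the 50-value `$in` cap and the one-bound-per-direction rule described above:

```python
from urllib.parse import quote

LOWER_BOUND_OPS = {"$gt", "$gte"}
UPPER_BOUND_OPS = {"$lt", "$lte"}

def build_filter_query(filters):
    """Serialize e.g. {"status": {"$in": [...]}, "createdAt": {"$gte": ...}}
    into a v2-style query string like status[$in]=a,b&createdAt[$gte]=...
    """
    parts = []
    for field, cond in filters.items():
        if not isinstance(cond, dict):      # plain equality: field=value
            parts.append(f"{quote(field)}={quote(str(cond))}")
            continue
        ops = set(cond)
        # Reject $gt together with $gte (and $lt with $lte) on one field.
        if len(ops & LOWER_BOUND_OPS) > 1 or len(ops & UPPER_BOUND_OPS) > 1:
            raise ValueError(f"conflicting range operators on {field}")
        for op, value in cond.items():
            if op == "$in":
                if len(value) > 50:
                    raise ValueError("$in accepts at most 50 values")
                value = ",".join(str(v) for v in value)
            # Keep ',' and ':' readable, as in the documented examples.
            parts.append(f"{quote(field)}[{op}]={quote(str(value), safe=',:')}")
    return "&".join(parts)
```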
These filtering capabilities in v2 are designed with performance and reliability in mind. By limiting filters to indexed fields and sane operator usage, the system can run queries using database indexes (which are very fast for lookups).
The operator restrictions (like not mixing `$gt` and `$gte`) prevent ambiguous queries and ensure the database can use a single range or equality condition per field, which is easier to optimize. Likewise, capping the `$in` array length avoids giant queries that could slow down the system.
In summary, the v2 filtering approach gives developers flexibility to query data in useful ways while putting guardrails that protect the service from misuse. These constraints ultimately help maintain fast response times and stable service for everyone by avoiding the pathological cases that were possible in v1.
OAuth 2.0 and Granular Security Scopes
Another major upgrade in API v2 is the adoption of OAuth 2.0 for authentication and the introduction of granular security scopes. In v2, we use access tokens obtained via an OAuth flow. These tokens are both more secure and flexible: each carries specific scopes that determine what parts of the API it can access. This change aligns our API with industry best practices for security and gives developers fine-grained control over permissions.
Why OAuth 2.0 Replaces the Old Token Mechanism
- Industry-Standard Security: OAuth 2.0 is a well-vetted standard used by many platforms (Stripe, Google, Twilio, etc.) for delegating access. By switching to OAuth, we leverage a robust, battle-tested protocol with built-in support for token expiration, refresh tokens, and user consent flows.
- Short-Lived Tokens: In v2, access tokens are typically short-lived (for example, a one-hour lifetime) and can be rotated or revoked. This limits the window of exposure if a token is compromised. Your integration securely fetches a fresh token when needed.
- Improved Developer Experience: OAuth 2.0 integrates with existing libraries and workflows. Developers can use standard OAuth libraries to obtain and refresh tokens, rather than dealing with proprietary authentication schemes. Additionally, the OAuth 2.0 mechanism supports user consent screens and third-party integrations more seamlessly, which is important if you’re building an app that needs a user’s permission to access their data on our platform.
- Migration Note: The previous security scopes (`officernd.api.read`, `officernd.api.write`) will no longer work on v2 endpoints. If you try to call a v2 API with an old scope, you will receive an authentication error. To migrate, you'll need to adjust the scopes of your current OAuth client and use it to request access tokens as described below.
NB: API v1 Security Scopes Deprecation Notice
Adding security scopes for API v1 has been disabled by default to encourage migration to API v2. However, there are certain scenarios where API v1 scopes may still be required — for example, when creating a new Zap through the OfficeRnD Zapier integration, which currently relies on API v1.
If you need to enable API v1 scopes for a specific application, please contact the OfficeRnD support team to request access. These cases will be reviewed and approved as exceptions during the transition period.
Scope-Based Access Control
With OAuth 2.0, every access token is associated with one or more scopes. Scopes define the exact permissions the token grants, adhering to the principle of least-privilege access. Instead of an all-or-nothing scope, you now request only the scopes your application needs. This minimizes security risks and gives users and integrators confidence that your app only has the permissions it requires.
Our scopes are structured in a hierarchical, descriptive manner, organized by product module. The general format is:
<product>.<category>.<resource>.<permission>
For our platform (codenamed "flex"), scopes begin with flex followed by the domain of functionality. For example:
- `flex.billing.payments.read` - allows read-only access to the Payments module in the Billing category.
- `flex.billing.payments.create` - allows creating payments in the Billing category.
- `flex.users.profile.update` - (hypothetical example) might allow editing user profile information in the Users category.
Each endpoint in the API is protected by one or more required scopes. When a client calls that endpoint, the token they provide must include the necessary scope. For instance, an endpoint that returns a list of payments might require `flex.billing.payments.read`. Another that creates a new payment might require `flex.billing.payments.create`. This granular scope system ensures that even if a token is leaked, it grants access only to a limited set of actions (unlike a full-access token in v1).
Internally, our API uses these scopes to guard endpoints. We’ve implemented custom authorization decorators (e.g., an `@AuthGuard` in our code) on each route to automatically check that the incoming request’s token contains the proper scopes. If the token is missing a required scope, the request is rejected with a 401 Unauthorized or 403 Forbidden error. This happens transparently – as a developer using the API, you just need to make sure your token has the right scopes when calling each endpoint.
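Conceptually, the guard is a subset check between the scopes a token carries and the scopes an endpoint requires. A minimal sketch of that check (the real `@AuthGuard` is server-side NestJS code and is not reproduced here):

```python
def token_has_scopes(token_scopes, required_scopes):
    """True when the token grants every scope the endpoint requires."""
    return set(required_scopes) <= set(token_scopes)

def authorize(token_scopes, required_scopes):
    """Raise when a required scope is missing (the 401/403 case)."""
    if not token_has_scopes(token_scopes, required_scopes):
        missing = set(required_scopes) - set(token_scopes)
        raise PermissionError(f"missing scopes: {sorted(missing)}")
```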
Obtaining and Using an OAuth 2.0 Token
Getting a Token: To use API v2, you first need to obtain an OAuth 2.0 access token from our authorization server. The exact steps depend on your use case:
- Client Credentials Flow (Server-to-Server): If you are building a backend integration (no user login, just your service authenticating to our API), you can use the Client Credentials flow. Use your client ID/secret to request a token directly. For example, a token request might look like:
POST https://identity.officernd.com/oauth/token
Content-Type: application/x-www-form-urlencoded
grant_type=client_credentials&
client_id=<YOUR_CLIENT_ID>&
client_secret=<YOUR_CLIENT_SECRET>&
scope=flex.billing.payments.read%20flex.billing.payments.create
This asks for a token with the `flex.billing.payments.read` and `flex.billing.payments.create` scopes. (Note: scopes in the request are space-separated, URL-encoded as `%20` as shown.) The response will contain an `access_token` (a JWT or opaque token string) and an `expires_in` (lifetime in seconds), among other fields.
Once you have an access token, use it in the Authorization header for every API call to v2 endpoints:
Authorization: Bearer <ACCESS_TOKEN>
Using Scopes Effectively: Make sure to request only the scopes your application truly needs. For example, if your integration only reads data and never writes, request the `.read` scopes and omit the `.create`, `.update`, or `.delete` scopes. If you need to access only billing-related endpoints, you don’t need scopes for other modules like user management. This not only adheres to best practices but can also make the authorization step easier for users to accept (they will see your app asking for a limited set of permissions). You can always request additional scopes later if your app’s needs grow, but adding scopes will typically require re-authorization by the user to grant the new permissions.
Token Refresh: Depending on the OAuth flow, you may receive a refresh token along with your access token. A refresh token can be used to get a new access token without user intervention when the current one expires. This allows your integration to maintain long-term access without requiring the user to log in again. The presence of short-lived access tokens plus refresh tokens means even if an access token is stolen, it will soon expire, and the thief cannot refresh it without the refresh token (which you keep secure on your server). Always store your client secret and refresh tokens securely, and never expose them in client-side code.
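The fetch/cache/refresh cycle can be sketched as a small token manager. The transport function is injected so the sketch stays self-contained; the endpoint URL and the `access_token`/`expires_in` field names follow the client-credentials example above, while the refresh margin is an illustrative choice:

```python
import time
from urllib.parse import urlencode

class TokenManager:
    """Cache an OAuth 2.0 client-credentials token and refresh it near expiry.

    post_form(url, body) must POST the form-encoded body to the token
    endpoint and return the decoded JSON token response as a dict.
    """
    def __init__(self, post_form, client_id, client_secret, scopes,
                 token_url="https://identity.officernd.com/oauth/token",
                 margin=60):
        self.post_form = post_form
        self.body = urlencode({
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": " ".join(scopes),  # scopes are space-separated
        })
        self.token_url = token_url
        self.margin = margin          # refresh this many seconds early
        self._token, self._expires_at = None, 0.0

    def access_token(self):
        # Fetch a fresh token on first use or shortly before expiry.
        if self._token is None or time.time() >= self._expires_at - self.margin:
            resp = self.post_form(self.token_url, self.body)
            self._token = resp["access_token"]
            self._expires_at = time.time() + resp["expires_in"]
        return self._token

    def auth_header(self):
        return {"Authorization": f"Bearer {self.access_token()}"}
```

Because the cached token is reused until it nears expiry, repeated API calls do not hammer the token endpoint.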
Real-World Benefits of OAuth Scopes
The shift to OAuth 2.0 with scopes greatly enhances security for all parties:
- For Developers: You have precise control over what your application can do. This modular permission system means you can build applications that only access certain data (for example, a billing dashboard that only needs billing read access) without risking exposure of other data. It also means you can leverage standard OAuth 2.0 libraries to handle the heavy lifting of authentication.
- For End Users or Clients: They can confidently authorize third-party applications, knowing that each token is limited. For instance, if a partner app is integrating to pull payment history, the end user can grant it the `flex.billing.payments.read` scope only. The app cannot (and will not be allowed to) perform other actions like writing data or accessing unrelated modules. This fosters trust and aligns with compliance requirements by enforcing the least-privilege principle.
- For Our Platform: It allows us to open up more functionality via API while still protecting sensitive operations. We can create new scopes for new features, and third-party integrators can opt into those without getting carte blanche access. Internally, the use of scoped tokens and our `@AuthGuard` checks means every request is verified for permission, significantly reducing the risk of accidental or malicious access to data that wasn’t intended to be exposed.
Migration Tips for OAuth 2.0
To transition to the new OAuth 2.0 authentication in API v2, follow these steps:
- Create an OAuth Client: Log in to the developer portal (or use the Developer API, if available) to register a new OAuth 2.0 client application. You’ll obtain a Client ID and Client Secret. Configure the redirect URI if you plan to use the authorization code flow.
- Decide on Scopes: Determine which API operations your app needs, and find the corresponding scopes in our documentation. For initial testing, you might start with broad read access to verify everything (`flex.{category}.{resource}.read` across modules), but for production use, narrow this down.
- Implement the OAuth Flow: Update your application code. For backend service-to-service calls, implement a client credentials token request to fetch an access token at startup (and periodically refresh it).
- Update API Calls: Remove any usage of the old API key or token in your HTTP client configurations. Instead, after obtaining the OAuth access token, attach it to all API v2 requests in the `Authorization: Bearer` header. Ensure your HTTP client can handle refreshing the token when it expires (for example, catch a 401 Unauthorized response, then use the refresh token to get a new access token, and retry the request).
- Testing: Test your updated integration thoroughly. Try calling an endpoint without the required scope to see that you receive a forbidden error, then adjust to include the proper scope and verify the call succeeds. This will give you confidence that you haven’t missed any needed permissions. Also, test token expiration and refresh logic if applicable.
- Gradual Rollout: If you have existing users or API consumers, roll out the new OAuth-based authentication gradually. You might run v1 and v2 in parallel during a transition period. Ensure that any stored old tokens are replaced with new OAuth tokens before entirely switching over.
By completing the above steps, you’ll have a secure integration with API v2’s OAuth 2.0 system. While the initial setup is a bit more involved than using a single API key, the payoff is significant: enhanced security, better alignment with third-party integration standards, and the ability to leverage fine-grained scopes for a tailored integration.
In the spirit of platforms like Shopify or Google’s APIs, our move to OAuth 2.0 with scopes is designed to support a growing ecosystem of applications in a safe and controlled manner.
Deprecation of `$populate`
In API v1, the `$populate` parameter was used to automatically include related resources' data in a single request. This means clients could ask the server to join or embed related entities on the fly. For example, a v1 client might request `/api/v1/organizations/{orgSlug}/members?$populate=team` to get each member along with the full team record embedded, instead of just a team ID reference. This was convenient for retrieving complex objects in one go, but it came at a cost.
In API v2, `$populate` has been deprecated and removed in favor of a leaner, more secure approach.
What did `$populate` do in v1?
It essentially allowed eager-loading of relationships. When you added `$populate` (sometimes with a field name or wildcard), the API would fetch the related data from the other collection or service and merge it into the response. By default, an API response in v1 would only contain the resource’s fields (e.g., a member’s properties, including maybe a team ID). With `$populate`, you could ask the API to replace that ID with the full team object, or include it under a sub-key. This behavior is analogous to a SQL JOIN or a MongoDB populate – the server does the work of gathering all related info.
In short, `$populate` let the client retrieve as much related information as possible in one request. This was useful for reducing the number of API calls a client had to make. For instance, v1 might allow:
GET /api/v1/organizations/{orgSlug}/members?$populate=team
to not only return the member data but also a `team` field containing the team's data. Under the hood, the server would look up the team by ID and attach its details.
Why was `$populate` deprecated in v2?
The removal of `$populate` in API v2 was driven by multiple factors:
Performance
Populating related data can dramatically slow down a request and increase server load. Each population is essentially an extra query (or a more complex join) that the server must perform. In v1, liberal use of `$populate` could turn a single request into many database lookups. This could overwhelm the system, especially if clients requested deep or multiple nested populates.
Such heavy requests impact the requesting client and can also degrade overall API responsiveness for others. By deprecating `$populate`, v2 avoids these “slow query” pitfalls. The v2 philosophy is to keep each request focused and lightweight, which in turn keeps the API speedy and scalable.
As noted in the Strapi framework (which had a similar populate concept), not auto-populating by default “saves bandwidth from the database queries” that would otherwise be needed. Essentially, not populating means less data transferred and less work per request, which is a win for performance.
Security and Access Control
With $populate, the API had to be very careful about what data got exposed. In v1, a poorly configured population could expose fields or related records that the client wouldn’t normally have permission to see. Each relationship might have its own permissions, and the server had to enforce those on the fly for each population. This added complexity and risk. For example, suppose an order has an associated user account, and the API has certain privacy rules for user data. If a client populates the user, the API must ensure that only permissible user fields are included.
In v1, managing these checks across many possible population combinations was error-prone. In Strapi’s context (just as an illustration), you “can’t populate [a] relation unless you enable the find permission on [that related] collection type”, which shows that extra permissions logic is needed for populating.
By removing $populate, v2 reduces these risks: each endpoint can be designed to return only the data it should, and there’s no sudden exposure of a different model’s data via a query parameter. If a client needs related data, they must explicitly call that other endpoint, where normal auth checks apply. This separation makes it easier to ensure proper authorization and to avoid inadvertently leaking data.
Maintainability & Complexity
$populate introduced a lot of complexity in the API’s implementation and usage. From the server-side perspective, every endpoint needed to handle potentially arbitrary combinations of populating child data. This made the code harder to maintain, as developers had to consider many edge cases (What if a populated record is deleted? What if there are circular relations? How do we merge fields?). It was also a challenge to optimize, sometimes requiring custom caching or batching mechanisms that complicated the codebase.
By deprecating $populate, the API v2 code is simpler: each endpoint mostly concerns itself with its primary resource. This improves maintainability and makes future changes or optimizations to that endpoint more straightforward, since there’s no need to account for embedded foreign objects on every request.
From the client-side perspective, relying on $populate could lead to inconsistent data handling. The shape of the API response could differ depending on query parameters (populated vs. not), which made client code more complex (e.g., handling company as an ID vs. company as an object). Removing population leads to a more uniform API contract: clients know that, for example, a team field will always be an ID (or always a separate sub-resource link), not sometimes an expanded object. This consistency simplifies client logic and reduces bugs.
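The difference in client-side handling can be illustrated with a small Python sketch (the helper names are hypothetical):

```python
# In v1, a "team" field could be either a bare ID or an expanded object
# depending on whether $populate was used, so clients needed defensive
# branching like this:

def team_id_v1(member):
    team = member["team"]
    # Populated responses embed an object; unpopulated ones return an ID.
    return team["_id"] if isinstance(team, dict) else team

# In v2 the contract is uniform: "team" is always an ID, so no branching.
def team_id_v2(member):
    return member["team"]

id_a = team_id_v1({"team": 456})                          # unpopulated shape
id_b = team_id_v1({"team": {"_id": 456, "name": "Acme"}})  # populated shape
id_c = team_id_v2({"team": 456})
```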
What to do instead of using $populate?
In API v2, the recommended approach is to use additional endpoints or follow-up queries to retrieve related data. The API now follows a more RESTful, modular design. If you have a member and you need the company info, you first request the member (e.g., GET /api/v2/organizations/{orgSlug}/members/123), which returns something like:
{
  "_id": 123,
  "company": 456,
  // other member fields
}
Then, to get the company details, you would use the /api/v2/organizations/{orgSlug}/companies/456 endpoint in a separate request. This two-step retrieval might seem less convenient than a single populated call, but it has significant benefits: each call is lighter and easier to cache, and you only fetch what you actually need. In many cases, clients might not need the full company object every time. By not automatically populating, the API avoids wasting resources. Modern client applications can perform these requests in parallel if needed, and with HTTP/2 and HTTP/3, multiple small requests incur minimal overhead.
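The two-step retrieval can be sketched as follows. Here `fetch_json` stands in for whatever HTTP client you use (e.g., `requests.get(...).json()`), and is stubbed so the sketch runs offline; the base URL and response data are illustrative:

```python
# Minimal sketch of v2's two-step retrieval: fetch the member, then follow
# the company ID to the companies endpoint.

BASE = "https://example.com/api/v2/organizations/{org}"

def get_member_with_company(fetch_json, org, member_id):
    """Fetch a member, then resolve its company via a second request."""
    root = BASE.format(org=org)
    member = fetch_json(f"{root}/members/{member_id}")
    company = fetch_json(f"{root}/companies/{member['company']}")
    return member, company

# Offline stub simulating the two v2 endpoints:
def fake_fetch(url):
    if url.endswith("/members/123"):
        return {"_id": 123, "company": 456}
    if url.endswith("/companies/456"):
        return {"_id": 456, "name": "Acme Inc."}
    raise KeyError(url)

member, company = get_member_with_company(fake_fetch, "acme", 123)
```

Injecting the fetcher keeps the retrieval logic testable without network access, which is itself one of the benefits of the plainer v2 response shapes.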
Furthermore, splitting data across endpoints is aligned with good API design. As one API design guide notes, “splitting up API data into multiple endpoints that can be grabbed if needed is really handy”, allowing you to navigate relationships as needed rather than getting everything in one blob. It encourages a clear separation of concerns and leverages the idea of hypermedia or resource linking (each resource has its own URL). For example, a member’s response might include a link or reference to the company, which the client can follow. This is similar to how web pages work (linking to others) and is a stable, scalable approach.
Are there any exceptions?
In rare cases, the API v2 team may choose to internally populate or join data for a specific endpoint, but this is done in a controlled manner and not via a general $populate parameter. For instance, if there’s a critical use case to display a summary that includes a few fields from a related entity (and doing two requests would be too slow for that scenario), the API might return a composite response. These cases will be explicitly documented and handled behind the scenes. As a client developer, you should not expect or rely on the arbitrary population of every relation. Whenever you don’t see a piece of related info in a v2 response, plan to call the appropriate endpoint to get it.
How does the deprecation of $populate improve the API?
Removing $populate leads to a cleaner separation of data and more predictable API behavior. It encourages clients to fetch only the data they need.
The new model is easier to maintain and less error-prone; the API team can optimize each endpoint individually (for example, caching the common heavy data on the server, if necessary, or scaling out a specific service) without worrying about the ripple effects of populating others. It also plays nicer with API evolution – changes to a related resource’s schema are less likely to break the main resource, because they are loosely coupled (only linked by an ID).
Overall, while migrating to v2 might require some adjustments (making a couple more requests in places where you previously used $populate), it results in a more scalable and reliable system. Clients can cache responses for sub-resources and reuse them, and the server can serve more requests with the same resources because each request is lighter.
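One way clients can exploit this is a small TTL cache for sub-resource responses, so many member lookups reuse a single company fetch. This is a generic sketch; the class name, TTL value, and URLs are illustrative:

```python
import time

class TTLCache:
    """Cache fetched sub-resources (e.g., companies) for a short time."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (expires_at, value)

    def get_or_fetch(self, url, fetch_json):
        entry = self._store.get(url)
        if entry and entry[0] > time.monotonic():
            return entry[1]                      # fresh cache hit
        value = fetch_json(url)                  # miss or expired: refetch
        self._store[url] = (time.monotonic() + self.ttl, value)
        return value

# Demonstrate with a counting stub: two lookups, one actual fetch.
calls = []
def counting_fetch(url):
    calls.append(url)
    return {"_id": 456, "name": "Acme Inc."}

cache = TTLCache()
first = cache.get_or_fetch("/companies/456", counting_fetch)
second = cache.get_or_fetch("/companies/456", counting_fetch)  # cache hit
```

In production you would respect the API's HTTP caching headers where present rather than picking an arbitrary TTL.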
Recommended client adaptations
If you were using $populate in v1, audit those instances and update your client code to fetch related data explicitly. For list endpoints, you might now get only IDs or minimal info for related objects. Use those IDs to call detail endpoints. If network round-trips are a concern, consider that the trade-off was intentionally made for the reasons above. You can mitigate perceived latency by making parallel requests or pre-fetching data you know you’ll need (for example, if showing a list of members, you might pre-fetch the company info for all companies that appear in the list).
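A possible pre-fetching pattern: given a page of members, collect the distinct company IDs and fetch each company once, concurrently. The sketch below uses the standard-library `ThreadPoolExecutor` and a stubbed fetcher; swap in your real HTTP call:

```python
from concurrent.futures import ThreadPoolExecutor

def prefetch_companies(members, fetch_company):
    """Fetch each distinct company referenced by a page of members, in parallel."""
    company_ids = {m["company"] for m in members}   # dedupe before fetching
    with ThreadPoolExecutor(max_workers=8) as pool:
        companies = pool.map(fetch_company, company_ids)
    return {c["_id"]: c for c in companies}

# Stubbed data: three members referencing two distinct companies.
members = [
    {"_id": 1, "company": 456},
    {"_id": 2, "company": 456},
    {"_id": 3, "company": 789},
]
fake_db = {456: {"_id": 456, "name": "Acme"}, 789: {"_id": 789, "name": "Globex"}}

companies_by_id = prefetch_companies(members, lambda cid: fake_db[cid])
```

Deduplicating before fetching means the number of follow-up requests scales with distinct companies, not with list length.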
Deprecating $populate was a conscious decision to favor performance, security, and clarity over the convenience of single-call data aggregation. While it requires some relearning for those used to v1, it ultimately leads to a better API design. Embracing the v2 approach will yield more maintainable client code and a more robust interaction with the API. Your applications may see faster response times (since each call is doing less work) and fewer surprises in the data.
As you migrate, take advantage of this change to implement or improve client caching and reduce unnecessary data fetching. In the long run, both the API and its consumers benefit from this cleaner separation of concerns. The API v2 team is confident that these changes improve the platform’s scalability and maintainability, setting a solid foundation for future features and enhancements. Enjoy the more streamlined, predictable data flow in v2, and happy coding!
All these changes combine to make API v2 a vast improvement over v1 for client developers. We know that migrating from v1 to v2 will require updates to your code, and that breaking changes are never fun, but we want to reassure you that this upgrade is worth the effort. By embracing modern standards and cleaning up legacy quirks now, we’ve ensured that the OfficeRnD API will be more stable for the long haul.
In other words, the breaking changes in v2 are a one-time investment for future stability. This new version sets us up to add features more smoothly (with fewer breaking changes down the road) and to provide a more reliable service to you. We’re confident that as you migrate to API v2, you’ll quickly feel the benefits in terms of faster responses, easier integration, and peace of mind with enhanced security.
In the rest of this migration guide, we’ll walk you through everything you must know to make the switch. We’ll compare v1 and v2 endpoints, highlight what’s changed, and give you practical tips for updating your integration.
Our goal is to make the migration as clear and straightforward as possible, so you can start enjoying the improvements of API v2 with minimal hassle. Welcome to the new API – we’re excited to see what you build with it!