Issue description:
A custom plugin solution is required to control the number of child flows spawned based on a static thread count, rather than relying on the split size defined in the splitter.
Use Case:
The customer needs to limit the number of parallel executions to a fixed number, determined by a static variable (e.g., thread count), regardless of the input payload size. This allows better control over resource usage and thread management, especially in high-load scenarios.
Example Scenario:
- Runtime pods configuration: Minimum 2 pods
- Desired child concurrency per pod: 5
- Total concurrent child flows allowed: 2 pods × 5 = 10
With the current plugin behavior, child flows are executed based on a fixed split size, which leads to inefficient execution when record volume is high and causes child flows to queue unnecessarily.
Current Behavior Example:
- Split Size: 500
- Total Records: 10,000
- Result: 10,000 / 500 = 20 child flow executions
- Only 10 can run concurrently → remaining get queued → increased processing time
Expected Solution:
Allow the number of child flows to remain fixed (e.g., 10), and dynamically adjust the split size accordingly.
- Desired concurrent child flows: 10
- Total Records: 10,000
- Updated Split Size: 10,000 / 10 = 1,000
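For clarity, the two calculations can be expressed in a few lines of Java. This is illustrative arithmetic only; the actual plugin change is described under "Changes to be implemented" below:
int totalRecords = 10000;
// Current behavior: a fixed split size drives the number of executions
int splitSize = 500;
int executions = (int) Math.ceil((double) totalRecords / splitSize); // 10,000 / 500 = 20
// Expected behavior: a fixed number of executions drives the split size
int maxExecutions = 10;
int dynamicSplitSize = (int) Math.ceil((double) totalRecords / maxExecutions); // 10,000 / 10 = 1,000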
Solution provided:
A new variable has been introduced in the existing CP_DataSplitter custom plugin to determine the number of child flows to spawn. This variable overrides the default split size logic, dynamically adjusting the split count based on the static thread count and the total number of records.
Next Steps:
Please create a copy of the existing CP_DataSplitter plugin and implement the necessary changes in the duplicate, leaving the original unchanged.
Once the changes are tested and verified on your end, you may replace the original plugin with the updated version.
Changes to be implemented:
1. Retrieve the total record count:
- If the source schema is anything other than XML or JSON Schema, you can use the expression below to get the total record count:
Service.<SourceLayoutActivityName>.OperationCount
- If the source schema is XML or JSON Schema, you need to apply some additional logic to get the total record count. For example, if you are receiving the JSON via an API response, you may add the lines of code below:
import org.json.JSONArray;
import org.json.JSONObject;

// Sample JSON response (replace with the actual API response)
String jsonResponse = "{\"employees\":[{\"id\":1,\"name\":\"John\"},{\"id\":2,\"name\":\"Jane\"}]}";
// Convert the string to a JSONObject
JSONObject jsonObject = new JSONObject(jsonResponse);
// Get the employees array
JSONArray employees = jsonObject.getJSONArray("employees");
// Get the employee count
int employeeCount = employees.length();
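The snippet above covers JSON. For an XML source, a similar count can be obtained with standard JAXP DOM parsing. The sketch below is illustrative only and assumes a hypothetical response with repeating 'employee' elements; replace the element name with the actual repeating record element in your schema:
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sample XML response (replace with the actual response and element name)
String xmlResponse = "<employees><employee id=\"1\">John</employee><employee id=\"2\">Jane</employee></employees>";
// Checked exceptions from the parser should be handled or declared in the plugin
DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
Document doc = builder.parse(new ByteArrayInputStream(xmlResponse.getBytes(StandardCharsets.UTF_8)));
// Count the repeating record elements
int employeeCount = doc.getElementsByTagName("employee").getLength();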
2. Define a 'maxExecutions' variable in the Custom Plugin and set a default value (e.g., 10).
3. Add the following lines of code to the Custom Plugin:
// Calculate splitSize dynamically from the total record count and maxExecutions
int totalRecordCount = Integer.parseInt((String) context.get("Service.SG_Text_Layout_Src.OperationCount"));
int maxExecutions = Integer.parseInt(service.getValueByName("maxExecutions"));
int splitSize = (int) Math.ceil((double) totalRecordCount / maxExecutions);
context.put("splitSize", String.valueOf(splitSize));
// Update the queueSize assignment and comment out the existing line:
int queueSize = Integer.parseInt((String) context.get("splitSize"));
// int queueSize = Integer.parseInt(service.getValueByName("splitSize"));
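Optionally, you may also want to guard against edge cases before the division. This is a suggestion, not part of the delivered plugin: a maxExecutions value of 0 or less would otherwise cause the cast of an infinite double to produce a nonsensical split size.
// Optional guard (suggestion only): protect against a misconfigured maxExecutions
if (maxExecutions <= 0) {
    maxExecutions = 1; // fall back to a single child flow
}
// Replaces the splitSize calculation above; never drops below 1
splitSize = Math.max(1, (int) Math.ceil((double) totalRecordCount / maxExecutions));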
4. Save the Custom Plugin
The updated Custom Plugin code is attached for reference.
Comparison: Existing Plugin vs. Updated Plugin Code
- The existing plugin controlled executions based on the total record count, splitting the data by a predefined number of records per execution (i.e., based on splitSize).
- The updated plugin instead works based on a fixed number of executions, dynamically calculating and updating the splitSize to evenly distribute records across those executions.