I’m looking for a solution to a problem I’m facing while designing a production workflow.
My company recently implemented an ERP system to manage orders for our print shop (large format and digital printing). The ERP includes a price calculator similar to Switch Client, with order information such as dimensions, finishing, quantities, etc.
The issue is that the ERP's JSON output does not contain the production file itself; it only provides the order data, with links or metadata for processing the production files.
So I use the API to retrieve the JSON, map all the data I need, and then create a folder structure (for example: CO-2601-0001/SOURCE). I place the JSON file inside this SOURCE folder, and later I manually drop the client's PDF into the same folder.
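For context, the parking step described above can be sketched roughly as follows. This is a generic illustration outside of Switch; the field name "order_number" is an assumption and should be adapted to the actual keys in your ERP's JSON.

```python
import json
from pathlib import Path

def park_order(order: dict, base_dir: str) -> Path:
    """Create the <order>/SOURCE folder and park the order JSON inside it.

    "order_number" is a hypothetical field name; adapt it to your ERP's JSON.
    """
    source = Path(base_dir) / order["order_number"] / "SOURCE"
    source.mkdir(parents=True, exist_ok=True)
    (source / f"{order['order_number']}.json").write_text(json.dumps(order, indent=2))
    return source
```

For example, `park_order({"order_number": "CO-2601-0001"}, "/orders")` would create `/orders/CO-2601-0001/SOURCE` with the JSON parked inside.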
This is where my problem appears:
I can't find a way to monitor the inside of that folder so that the workflow starts automatically when the PDF arrives. I don't think I'm the first one with this kind of issue — does anyone have a solution or workaround for this?
If you need more details or clarification, feel free to ask.
Thanks in advance for your help!
How to automatically trigger a Switch flow when JSON and PDF arrive at different times
Re: How to automatically trigger a Switch flow when JSON and PDF arrive at different times
With the Assemble Job element you can set it to recognize a common value, like the order number, and configure it to inject once it has two files with the same identifier (the order number). Those two files are then sent to the outgoing connection in a folder.
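Conceptually, the matching that Assemble Job performs works like the sketch below: each arriving file is bucketed by an identifier parsed from its name, and a pair is released once both files are present. The filename pattern `CO-\d{4}-\d{4}` is an assumption based on the example order number in the question.

```python
import re

# Assumed naming convention: the order number (e.g. CO-2601-0001)
# appears somewhere in both the JSON and the PDF filename.
ORDER_RE = re.compile(r"CO-\d{4}-\d{4}")

pending: dict[str, set[str]] = {}  # order number -> filenames seen so far

def on_file_arrived(filename: str):
    """Collect files per order; return the pair once both have arrived."""
    match = ORDER_RE.search(filename)
    if not match:
        return None  # no recognizable order number in the name
    order = match.group()
    pending.setdefault(order, set()).add(filename)
    if len(pending[order]) == 2:  # JSON + PDF are both present
        return pending.pop(order)
    return None
```

The key design point, mirrored in Assemble Job's settings, is that nothing is released downstream until every expected file for the identifier has arrived.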
https://www0.enfocus.com/en/appstore/pr ... semble-job
Re: How to automatically trigger a Switch flow when JSON and PDF arrive at different times
When there is a considerable delay between the arrival of the data file and the metadata file, the most efficient approach is the following. In your case the metadata arrives first, but it can also be the other way around.
You park the JSON file in some location, which you are already doing. Then you do not drop the PDF file in the location where the JSON is located; instead you drop it in a generic location (or you use a submit point), and you inject the JSON file into the flow using "Inject job" or the app "Inject wildcard" (the app is more flexible and is likely the better choice here). You then use "JSON pickup" to attach the JSON to the data file as metadata.

The one thing required here is that the data file has some information (in its name, for example) that allows you to inject the matching JSON file. If the name of the data file does not carry that information, you can use a submit point where the user enters the job ID.

If you need it for archiving purposes, you can also send the data file in two directions: into the rest of the flow for processing, and into the same location where the JSON is stored.
What is also common, and you could investigate if that is an option for you, is that the metadata contains a link to download the data file. You can then use "JSON pickup - Metadata is asset" and use the link in the JSON as a variable in "HTTP request" to download the data file.
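Outside of Switch, the "link in the metadata" pattern amounts to the sketch below: read the URL from the order JSON and fetch the file over HTTP. The key "download_url" is a hypothetical field name; your ERP's JSON will use its own key, which is what you would reference as a variable in "HTTP request".

```python
import json
import urllib.request
from pathlib import Path

def download_data_file(json_path: str, dest_dir: str) -> Path:
    """Read the download link from the order JSON and fetch the data file.

    "download_url" is an assumed field name; use your ERP's actual key.
    """
    order = json.loads(Path(json_path).read_text())
    url = order["download_url"]
    dest = Path(dest_dir) / url.rsplit("/", 1)[-1]  # name taken from the URL
    with urllib.request.urlopen(url) as resp:
        dest.write_bytes(resp.read())
    return dest
```

This removes the timing problem entirely: the flow no longer waits for a manually dropped PDF, because the JSON alone is enough to pull the data file.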