For native workload automation features (dependency management, SLA tracking, visual pipelines), you would typically wrap FileCatalyst commands inside a dedicated workload automation platform, using FileCatalyst as the file-movement plugin.
import subprocess

def run_fta(local, remote, server, user, pw):
    # Push a local file to the remote target via the fta-cli client
    cmd = ["fta-cli", "--server", server, "--username", user,
           "--password", pw, "--put", local, "--target", remote]
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0
import hashlib
import requests

# API_BASE and API_KEY are assumed to be defined for your deployment
def main():
    files_to_send = ["/data/file1.bin", "/data/file2.bin"]
    for f in files_to_send:
        # Pre-processing: compute hash
        with open(f, "rb") as fp:
            original_hash = hashlib.sha256(fp.read()).hexdigest()
        headers = {"X-API-Key": API_KEY}
        payload = {"source": f, "checksum": original_hash}  # assumed payload shape
        resp = requests.post(f"{API_BASE}/transfer", json=payload, headers=headers)
        transfer_id = resp.json()["id"]
Since FileCatalyst itself is primarily a high-speed file-transfer solution (using UDP acceleration), it does not have a native workload-automation engine built into its core. Instead, automation is achieved through its command-line tools, REST API, and HotFolders.
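The HotFolder pattern (poll a directory, push anything new automatically) can be sketched in a few lines of Python. This is a minimal illustration, not FileCatalyst's own implementation: the scan_new_files and hotfolder_loop_once helpers are hypothetical, and the runner parameter is injected so the loop can be exercised without the real fta-cli binary.

```python
import subprocess
from pathlib import Path

def scan_new_files(folder, seen):
    """Return files in `folder` not yet in `seen`, updating `seen`."""
    new = [p for p in sorted(Path(folder).glob("*"))
           if p.is_file() and p not in seen]
    seen.update(new)
    return new

def hotfolder_loop_once(folder, seen, runner=None):
    """One polling pass: push every newly observed file via fta-cli."""
    for f in scan_new_files(folder, seen):
        cmd = ["fta-cli", "--put", str(f), "--target", "/remote/"]
        if runner is None:
            subprocess.run(cmd, check=True)  # would invoke the real CLI
        else:
            runner(cmd)  # injected for testing
```

In production this loop would run on a timer (cron, systemd timer, or a scheduler), with `seen` persisted so restarts do not re-send files.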
fta-cli --log-level DEBUG --log-file /var/log/fc_workload.log --put file.dat
# Send 10 files in parallel
ls /data/to_send/*.dat | xargs -P 10 -I {} fta-cli --put {} --target /remote/

Check the file hash before transfer.
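The pre-transfer hash check mentioned above can be sketched in Python. The sha256_of and verify_transfer helpers are illustrative, not FileCatalyst APIs; the file is read in chunks so large transfers do not load fully into memory.

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream the file and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as fp:
        for chunk in iter(lambda: fp.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_transfer(src, dst):
    """True if source and destination file contents match."""
    return sha256_of(src) == sha256_of(dst)
```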
fta-cli --server hostname --port 21 --username user --password pass \
    --put /local/file.txt --target /remote/destination/
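For scheduled, unattended runs, a one-shot command like the one above is usually wrapped with retries. This sketch reuses the same fta-cli flags; put_with_retries is a hypothetical helper, and the runner parameter is injected so the logic can be tested without the real binary.

```python
import subprocess
import time

def put_with_retries(local, target, server, user, pw,
                     attempts=3, delay=5, runner=subprocess.run):
    # Same flags as the basic fta-cli invocation, retried on failure
    cmd = ["fta-cli", "--server", server, "--port", "21",
           "--username", user, "--password", pw,
           "--put", local, "--target", target]
    for attempt in range(attempts):
        result = runner(cmd, capture_output=True)
        if result.returncode == 0:
            return True
        if attempt < attempts - 1:
            time.sleep(delay)  # back off before the next attempt
    return False
```

A fixed delay is the simplest policy; exponential backoff is a common refinement when the far end may be briefly overloaded.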