by keyle 4 days ago

   I just need something that can do S3 and is reliable and not slow.
Oh, simply that.

I'm a simple man, I just need edge delivered cdn content that never fails and responds within 20ms.

thayne 3 days ago

I don't think that is what they are looking for. They just want something with an s3 compatible API they can run on their local network or maybe even on the same host.

bonesss 3 days ago

So, why not write to a shared wrapper/facade?

If you split the interaction API out to an interface detailing actual program interaction with the service, then write an s3 backend and an FS backend. Then you could plug in any backend as desired and write agnostic application code.

Personally I end up there anyways testing and specifying the third party failure modes.
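A minimal sketch of the facade idea above, in Python for illustration (the names `BlobStore` and `FSStore` are hypothetical, not from any library):

```python
# Hypothetical storage facade: domain code depends only on this
# interface; backends (S3, filesystem, in-memory fake) are swappable.
from abc import ABC, abstractmethod
from pathlib import Path


class BlobStore(ABC):
    """The minimal surface the application actually uses."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class FSStore(BlobStore):
    """Filesystem backend, handy for local dev and tests."""

    def __init__(self, root: Path):
        self.root = root

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


# An S3 backend would implement the same two methods with an S3 client
# (e.g. boto3's put_object/get_object) behind the same interface.
```

Application code written against `BlobStore` never knows which backend it got, which is exactly what makes the agnostic testing described above possible.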

thayne 3 days ago

What if you need it because you are using a third party application that requires an s3 api? Or you want to test your code that interacts with an s3 API?

bonesss 2 days ago

Same answer: wrap your usage and use the wrapper for agnostic testing & coding.

That’s the entire point: my domain code ‘stores’ something, and my implementation decides whether that means S3, a fixed kind of error, or a write to disk.

If my domain uses a third party app that needs an S3 API, and we’re not pointing it at S3, then I am writing an S3 API to test that dependency and ensure the third party code works regardless. Normally I’d call that “their problem” and call their service with the same wrapper as above.

syabro 3 days ago

what's the point then? Just api around FS?

mrweasel 3 days ago

For a lot of projects that would be sufficient. I've worked on projects that "required" an S3 storage solution. Not because it actually did, but because it needed some sort of object/file storage which could be accessed from somewhere: might be a Java application running in JBoss, might be a SpringBoot application in a container, on Kubernetes, Nomad or just on a VM.

Like it or not, S3 has become the de facto API for object storage for many developers. From the operations side of things, managing files is easier and already taken care of by your storage solution, be it a SAN, NAS or something entirely different. Being able to back up and manage whatever is stored in S3 with your existing setup is a direct saving.

If you actually use a large subset of S3's features this might not be a good solution, but in my experience you have a few buckets and a few limited ACLs and that's it.

pythonaut_16 2 days ago

I use a local garagefs on my NAS for small/new side projects, and it’s on my Tailscale for easy access

- Lets me deploy stateless containers easily

- Lets me leverage the NAS for local redundancy and a more centralized place to do backups

- When a project grows it’s easy to promote it to use a hosted S3

- Local S3 becomes a target for Litestream and Restic

- Developing against the local fs and then bolting on real file storage later is a huge point of friction, unless I’m using something like Rails that already has a good abstraction

skrtskrt 3 days ago

At this point S3 is an API spec more than a particular system. Plenty of things only work against the S3 API spec since the implementations have become such popular and relatively cheap and performant storage systems. It gives a nice limited surface area that doesn't allow you to do things that can get too complex or can vary too much across filesystems, etc.

jpfromlondon 3 days ago

would you not just say "edge delivered content"?