
IRC log for #rest, 2015-03-10

https://trygvis.io/rest-wiki/


All times shown according to UTC.

Time S Nick Message
00:30 shrink0r joined #rest
01:10 shrink0r_ joined #rest
02:37 shrink0r joined #rest
03:04 lemur joined #rest
03:41 lemur joined #rest
03:58 vanHoesel joined #rest
04:23 ewalti joined #rest
04:24 ewalti joined #rest
05:20 ewalti joined #rest
06:42 ewalti joined #rest
07:03 _ollie joined #rest
08:26 azr joined #rest
09:31 fumanchu_ joined #rest
09:44 Left_Turn joined #rest
10:12 shrink0r joined #rest
10:33 vanHoesel joined #rest
10:39 interop_madness joined #rest
10:51 shrink0r joined #rest
10:56 azr joined #rest
11:39 mezod joined #rest
12:48 azr joined #rest
12:58 azr_ joined #rest
13:04 shrink0r joined #rest
13:19 dEPy joined #rest
13:28 vanHoesel joined #rest
13:40 azr joined #rest
13:55 nkoza joined #rest
14:03 azr joined #rest
15:24 ewalti joined #rest
15:33 vanHoesel joined #rest
15:48 vanHoesel joined #rest
16:14 lemur joined #rest
16:42 shrink0r joined #rest
17:05 shrink0r_ joined #rest
17:17 SupaHam joined #rest
17:29 SupaHam joined #rest
17:46 jsys joined #rest
17:48 jsys So you have a bunch of resources with various authorization requirements. You can't make it a cross-cutting concern and wrap everything in a proxy that checks for permissions, because every single verb of every single resource requires different permissions. So how do you make sure you don't forget to test permissions?
17:52 asdf` isn't that implementation-specific?
18:04 jsys asdf`: it is; but it's a problem all services have :P
18:06 asdf` then to answer your question, how do *i* make sure i don't forget: by having a test suite
18:06 asdf` (which is of course somewhat naive because it means i assume whoever wrote the test suite was wiser than the implementor)
18:07 jsys asdf`: It's like we can never prove our code is correct, and there could be a vulnerability hiding in plain sight anywhere :P it's kind of unsettling
18:08 asdf` you can hire auditors
18:08 whartung you accept that the more fine grained your security, the more difficult it is to test, but you test it anyway
18:13 jsys I have the same problem with everything I do, so I guess I'm a bit OCD.
18:13 jsys :D
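
One way to make "having a test suite" systematic, as discussed above, is to enumerate every (verb, resource, role) combination so that a forgotten permission check fails loudly instead of silently. A hedged sketch in Python with pytest; the endpoints, roles, expected statuses, and the client_for_role fixture are all hypothetical, not anything described in the channel.

    import itertools
    import pytest

    RESOURCES = ["/orders", "/orders/42", "/users/7/profile"]   # hypothetical endpoints
    VERBS = ["GET", "POST", "PUT", "DELETE"]
    ROLES = ["anonymous", "customer", "admin"]

    # Expected status per combination; any combination missing from this table
    # makes the test fail, so a check can't be forgotten silently.
    EXPECTED = {
        ("GET", "/orders", "admin"): 200,
        ("GET", "/orders", "customer"): 200,
        ("GET", "/orders", "anonymous"): 401,
        # ... every remaining combination has to be listed here ...
    }

    @pytest.mark.parametrize("verb,resource,role",
                             itertools.product(VERBS, RESOURCES, ROLES))
    def test_permission_matrix(verb, resource, role, client_for_role):
        expected = EXPECTED.get((verb, resource, role))
        assert expected is not None, f"no expectation for {role} {verb} {resource}"
        # client_for_role is an assumed fixture returning an HTTP test client
        # authenticated as the given role.
        response = client_for_role(role).request(verb, resource)
        assert response.status_code == expected
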
18:26 azer_ joined #rest
19:01 ewalti joined #rest
20:03 graste joined #rest
20:03 saml joined #rest
20:12 begriffs joined #rest
21:13 mgomezch joined #rest
21:44 azer_ joined #rest
22:06 * pdurbin co-presented how we have woven permissions into our command pattern architecture: http://iqss.github.io/javaone2014-bof5619
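
A hedged sketch of the general pattern pdurbin's link refers to, permissions woven into a command pattern: each command declares the permissions it requires and a single engine enforces them before execution, so the check cannot be skipped per endpoint. The class and permission names below are invented for illustration and are not taken from the linked slides.

    class PermissionException(Exception):
        pass

    class Command:
        required_permissions = set()          # each subclass declares what it needs
        def execute(self):
            raise NotImplementedError

    class DeleteDatasetCommand(Command):
        required_permissions = {"dataset:delete"}
        def __init__(self, dataset_id):
            self.dataset_id = dataset_id
        def execute(self):
            print(f"deleting dataset {self.dataset_id}")

    class CommandEngine:
        def __init__(self, user_permissions):
            self.user_permissions = set(user_permissions)
        def submit(self, command):
            # Single choke point: every command passes through this check.
            missing = command.required_permissions - self.user_permissions
            if missing:
                raise PermissionException(f"missing permissions: {missing}")
            return command.execute()

    engine = CommandEngine(user_permissions={"dataset:read"})
    engine.submit(DeleteDatasetCommand(42))   # raises PermissionException
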
22:28 azer__ joined #rest
22:31 fuzzyhorns joined #rest
22:36 jsys joined #rest
22:39 jsys How would you model a resource which gets created and then, upon being read, destroys itself?
22:40 pdurbin like it's a job in a queue?
22:41 whartung I'd use a whiteboard
22:43 pdurbin and your brain
22:44 whartung but, simply, I'd do a DELETE that returned the resource representation as its payload
22:45 jsys whartung: shouldn't that be idempotent?
22:46 whartung it is idempotent. The next time you call it, you get 404
22:46 whartung actually I doubt delete is idempotent, just by its nature
22:46 whartung but it is, ostensibly, safe to send more than once -- like an elevator button
22:46 whartung but that's the behavior you wanted
22:47 jsys whartung: yeah. Only a very specific case of delete is idempotent. I don't know what they were thinking in that spec.
22:47 pdurbin it's safe to keep washing the potentially dirty plate over and over
22:47 whartung yea
22:48 jsys pdurbin: until you run out of soap
22:48 pdurbin eep!
22:52 whartung then you use json...
22:53 jsys "It's ok to use POST"
22:53 jsys I'll just POST, screw it, we're doing it live!
22:53 whartung POST would be the other option
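
A minimal Flask sketch of the DELETE option whartung suggests, purely illustrative: DELETE hands back the representation as its payload, and repeating the DELETE just gets a 404. The /claims path, the ID, and the payload are assumptions, not anything discussed in the channel.

    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    # In-memory store of read-once resources (illustrative data).
    claims = {"abc123": {"secret": "only readable once"}}

    @app.route("/claims/<claim_id>", methods=["DELETE"])
    def consume_claim(claim_id):
        representation = claims.pop(claim_id, None)
        if representation is None:
            # Already consumed or never existed: repeating the DELETE is harmless.
            abort(404)
        # The one and only read, delivered as the DELETE response body.
        return jsonify(representation), 200
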
23:05 jsys You know what would be cool? If HTTP was defined so one request could fetch many resources
23:05 jsys If you think about it, this is kind of a flaw with the spec in the first place.
23:06 jsys Good old N + 1 problem.
23:17 jsys http://cdn.meme.am/instances/500x/60094391.jpg
23:24 whartung several folks have batching things they're using.
23:24 jsys whartung: how do they work - windowing requests?
23:24 fumanchu joined #rest
23:24 whartung basically, just gang them up
23:25 jsys whartung: to gang them up you need to wait a bit, have a window for gathering messages, and then split them into batches
23:25 whartung that's an implementation detail
23:25 jsys it will work, but it's just kinda stupid to have to infer a batch request implicitly
23:25 jsys It's not an implementation detail whartung, when it adds complexity and adds *latency*
23:26 jsys Any windowing algorithm would have a latency of at least the window size
23:26 jsys You also would need HTTP2 to avoid the multi-request/response overhead
23:26 whartung that's only if the windowing is transparent, there's no suggestion that all folks using batching are using it transparently.
23:26 jsys Granted HTTP2 fixes things a bit.
23:27 jsys whartung: not sure what you mean
23:27 whartung you don't need that whole windowing thing if you're explicitly batching. You simply build up your batch, and you send it.
23:27 jsys you build it from what?
23:27 whartung GET /this; GET /that; GET /other -- GO
23:27 jsys and where are those GETs coming from?
23:28 whartung from the client
23:28 jsys If you're collecting GETs you're not responding to them. Ergo, windowing, ergo, latency
23:29 * jsys will use ergo more frequently going forward
23:29 whartung I think you're adding complexity that isn't necessarily there.
23:29 jsys whartung: when a GET arrives you can choose to respond immediately, or you can choose to wait and respond later in batch. That "later" part means latency
23:29 jsys whartung: so what isn't there
23:30 whartung the CLIENT is making a single request with the entire batch.
23:30 jsys whartung: you can request only one resource at a time with HTTP
23:30 whartung it understands the semantics of what it's asking. It's not making "3 calls", it's making one call
23:30 whartung POST /bathprocessor "GET /this; GET /that; GET /other"
23:30 whartung *batchprocessor
23:31 whartung single request
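
A client-side sketch of the explicit batch whartung describes: the client knows it is making one call and ships all the GETs in a single POST, so no server-side windowing (and no added waiting) is involved. The /batchprocessor endpoint, the host, and the payload shape are assumptions for illustration only.

    import requests

    # One explicit batch, built client-side, sent as a single POST.
    batch = {"requests": [
        {"method": "GET", "path": "/this"},
        {"method": "GET", "path": "/that"},
        {"method": "GET", "path": "/other"},
    ]}

    response = requests.post("https://api.example.com/batchprocessor", json=batch)

    # Assuming the server replies with one entry per sub-request, in order.
    for sub in response.json()["responses"]:
        print(sub["path"], sub["status"], sub["body"])
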
23:31 jsys whartung: there goes the cache in proxies and the client
23:31 whartung it's a POST, it's likely not cached anyway
23:31 jsys whartung: "don't use POST to fetch a cacheable resource"
23:31 whartung obviously a difficulty when trying to batch requests
23:31 jsys whartung: I'm not saying people shouldn't do it, heck I tunnel everything through post.
23:31 jsys But it's not REST if you do it that way
23:32 jsys Because the proxies see one opaque bundle
23:32 jsys Can't cache anything
23:32 whartung and in REST you're allowed to make requests with opaque bundles that don't cache anything. You just shouldn't.
23:32 jsys whartung: if HTTP actually explicitly supported batching GET it'd be totally different
23:32 shrink0r joined #rest
23:33 jsys whartung: and you can at least give me that one :P Cause you know I'm right
23:33 jsys I suppose they didn't think about it because designing for decades ahead in isolation from the world... is kinda nuts.
23:34 whartung I don't dream of what "If HTTP did…", I work with what I have.
23:34 jsys whartung: that's commendable, and exactly what I do. But we're in a channel called "REST"
23:36 whartung which actually has nothing to do with HTTP, but we won't go there....
23:37 jsys whartung: don't worry I know where you'd go ;)
23:38 jsys whartung: with some fixes HTTP could be a much better protocol to do REST upon.
23:38 jsys whartung: but it's ironic you can design a protocol better at REST than HTTP is (which REST was derived from)
23:42 jsys whartung: check it out, FTP is better at REST than HTTP is: MGET, MPUT
23:42 jsys Jeez.
