I'm not sure I understand. Are you saying you've developed a scriptable framework that sits on top of an existing API? Is the Google File API the workhorse in this case? Are you able to shed some light on how you actually apply load to the filesystems below? For example, I had this same problem in comparing alternative vendor solutions for a SAN, in the sense of establishing a credible benchmark for comparison. Is this what your framework resolves, i.e. a point of comparison across many different file systems? Would be keen to find out more.
Regards,
Tim Koopmans
90kts.com
This is not a scriptable framework that sits on top of the Google File API. Instead it leverages the fact that Google's File API is generic enough to handle all kinds of file systems inside Google. Hence all the file operations in our system use Google's File API.
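The key idea here, one generic file API fronting many backends so the same workload can be replayed against each, can be sketched as follows. All names (`FileSystem`, `InMemoryFS`, `run_workload`) are hypothetical illustrations, not Google's actual API:

```python
# Sketch of a generic file API abstraction; names are illustrative only.
from abc import ABC, abstractmethod


class FileSystem(ABC):
    """Minimal generic file interface that any backend implements."""

    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, path: str) -> bytes: ...


class InMemoryFS(FileSystem):
    """Toy backend standing in for a real (e.g. distributed) file system."""

    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}

    def write(self, path: str, data: bytes) -> None:
        self._files[path] = data

    def read(self, path: str) -> bytes:
        return self._files[path]


def run_workload(fs: FileSystem, n_files: int, size: int) -> int:
    """Apply the same load to any backend; returns total bytes written."""
    payload = b"x" * size
    for i in range(n_files):
        fs.write(f"/bench/file{i}", payload)
    return n_files * size


print(run_workload(InMemoryFS(), n_files=10, size=1024))  # 10240
```

Because `run_workload` only sees the `FileSystem` interface, the identical load can be replayed against any backend, which is what makes cross-filesystem comparison meaningful.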
The load applied to the file system is specified in a config file, samples of which are shown in the post.
This tool is indeed being used to benchmark performance numbers, which can be used to compare different file systems.
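For readers wondering what such a plain-text load config might look like, here is a purely illustrative sketch; the actual format is whatever the samples in the post show, and these parameter names are invented:

```
# hypothetical benchmark config (illustrative format only)
filesystem   = cluster-a
num_clients  = 50
file_size    = 64MB
operation    = sequential_write
duration     = 300s
```

Each run of the tool would read a file like this and translate it into the corresponding file operations via the generic file API, so the same config can drive identical load against different file systems.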
A great example of a decoupled but cohesive design.
Also it's refreshing to see that you went with a simple text layout for the config files. All too often these days I see people using the XML hammer to crack a little nut like simple config files.
Ok Rajat, thanks for that. I think I understand.
The linkage I was missing is how a config file actually triggers the load to occur. I'm assuming the Google File API does this work in this case. I also assume the API is proprietary.
In my situation I've had to rely on third-party tools such as Iometer to apply load to a filesystem. And this is where I've had difficulty achieving parity in results, in terms of comparison. For example, if I were to apply load to two different vendor solutions for a SAN (both with differing architectures), I've found it difficult to 'compare' benchmark results, as each vendor can argue that the manner in which load is applied differs. I think because you have a single API, this makes your comparison easier.
What caught my attention with your post is that you're comparing results for distributed file systems (implying different architectures) and that you'd somehow found a way to provide comparable results across all of them. I think what I'm reading here is that the Google File API is what makes that comparison possible, and that your config files have in effect just simplified the orchestration of a load test.
Thanks for sharing.
Tim Koopmans
90kts.com
Hi Rajat Jain, I was just curious to know which tools Google uses for functional testing of its different products. Something analogous to QTP, QC, TD, etc.?