[Labs-admin] Fwd: Re: Servers with GPUs
Yuvi Panda
yuvipanda at gmail.com
Thu Feb 23 20:47:24 UTC 2017
Thinking more about it, I think the bigger question is - is cloud
going to be a service team or a product team? Not sure if that's
standard terminology but...
A service team is primarily driven by what other teams / groups of
people want and what we're technically positioned to provide. The
roadmap and priorities are driven by people external to the team. The
workflow is often 'X people want Y, we are able to do Y, so let us do
Y'.
A product team, on the other hand, has a more focused vision of what it
wants to do that is intrinsic to the team. It might collaborate with
others to achieve its goals, but priority setting comes from inside
the team.
When I was in the mobile team, we treated them like a service team
('us: hey do this for us!') while they thought of themselves as a
product team ('them: this is what we want to do, not
what-everyone-else-asks-of-us'). This caused a lot of issues. From
talking to people, there is similar confusion about the ops team as well -
some teams think we are there to do things for them, which parts of ops
agree with and other parts disagree with - causing problems...
I'm sure someone else can talk about this far more eloquently than I
:D But I think we should explicitly decide at some point...
On Thu, Feb 23, 2017 at 11:58 AM, Andrew Bogott <abogott at wikimedia.org> wrote:
> On 2/23/17 1:54 PM, Yuvi Panda wrote:
>>
>> I agree that it's a good fit for openstack / nova, but a really bad
>> fit for the labs and/or cloud teams :D
>
> If there are multiple people inside and outside of the WMF that want this
> kind of VM, then it's a good fit for our team. There certainly are
> interesting things that you can do with a GPU that you can't do with a
> 2-core VM. If it's just Adam and Halfak then they should buy themselves a
> couple of servers :)
>
> -A
>
>
>
>>
>> On Thu, Feb 23, 2017 at 11:53 AM, Andrew Bogott <abogott at wikimedia.org>
>> wrote:
>>>
>>> On 2/23/17 1:48 PM, Bryan Davis wrote:
>>>>
>>>> On Thu, Feb 23, 2017 at 11:33 AM, Andrew Bogott <abogott at wikimedia.org>
>>>> wrote:
>>>>>
>>>>> On 2/23/17 12:30 PM, Yuvi Panda wrote:
>>>>>
>>>>> Do we have the human bandwidth to commit to doing this as a team? GPUs
>>>>> are fickle beasts.
>>>>>
>>>>> Noooo! I'm not telling them 'we will do this', only 'this might be
>>>>> technically possible'.
>>>>
>>>> This seems like a bad fit for Cloud. Providing co-location for
>>>> specialty hardware for a specific project is a lot of burden for us
>>>> and really doesn't give them any practice for eventual implementation
>>>> in production.
>>>
>>> Nova definitely /does/ support having different types of virt nodes, and
>>> scheduling certain VMs on certain hardware. So, having virt nodes with
>>> GPUs could just be an additional VM type offered by the cloud.
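(For context: that kind of hardware-aware scheduling is usually done with
host aggregates plus flavor extra specs. A rough sketch with the openstack
CLI - the aggregate, host, and flavor names below are made up, and the
scheduler would need the AggregateInstanceExtraSpecsFilter enabled:

    # group the GPU-equipped virt nodes into a tagged host aggregate
    openstack aggregate create gpu-hosts
    openstack aggregate set --property gpu=true gpu-hosts
    openstack aggregate add host gpu-hosts labvirt-gpu1001   # placeholder host name

    # define a flavor that only schedules onto hosts in that aggregate
    openstack flavor create --ram 16384 --vcpus 8 --disk 80 g1.gpu
    openstack flavor set --property aggregate_instance_extra_specs:gpu=true g1.gpu

So from Nova's point of view a 'GPU flavor' is just ordinary flavor and
aggregate plumbing.)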
>>>
>>> As to how those VMs actually /talk/ to the GPUs... in theory this is a
>>> feature implemented in Nova but I haven't investigated it at all.
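(The VM-to-GPU piece would be Nova's PCI passthrough support. Very roughly,
nova.conf on the GPU virt nodes exposes the card and gives it an alias - the
device IDs and alias name below are placeholders, and the exact option names
vary by OpenStack release:

    [pci]
    # example NVIDIA vendor/product IDs; whatever card we'd actually buy goes here
    passthrough_whitelist = { "vendor_id": "10de", "product_id": "1b80" }
    alias = { "vendor_id": "10de", "product_id": "1b80", "device_type": "type-PCI", "name": "gpu" }

and then the flavor requests one device via that alias (with the
PciPassthroughFilter enabled in the scheduler):

    openstack flavor set --property "pci_passthrough:alias"="gpu:1" g1.gpu

Whether that actually works well with consumer cards and our virt setup is
exactly the part nobody has investigated yet.)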
>>>
>>>> Bryan
>>>
>>>
>>>
>>
>>
>
--
Yuvi Panda
http://yuvi.in/blog