[Labs-admin] Fwd: Re: Servers with GPUs
Andrew Bogott
abogott at wikimedia.org
Thu Feb 23 19:58:13 UTC 2017
On 2/23/17 1:54 PM, Yuvi Panda wrote:
> I agree that it's a good fit for openstack / nova, but a really bad
> fit for the labs and/or cloud teams :D
If there are multiple people inside and outside of the WMF who want
this kind of VM, then it's a good fit for our team. There certainly are
interesting things that you can do with a GPU that you can't do with a
2-core VM. If it's just Adam and Halfak, then they should buy themselves
a couple of servers :)
-A
>
> On Thu, Feb 23, 2017 at 11:53 AM, Andrew Bogott <abogott at wikimedia.org> wrote:
>> On 2/23/17 1:48 PM, Bryan Davis wrote:
>>> On Thu, Feb 23, 2017 at 11:33 AM, Andrew Bogott <abogott at wikimedia.org>
>>> wrote:
>>>> On 2/23/17 12:30 PM, Yuvi Panda wrote:
>>>>
>>>> Do we have the human bandwidth to commit to doing this as a team? GPUs
>>>> are fickle beasts.
>>>>
>>>> Noooo! I'm not telling them 'we will do this', only 'this might be
>>>> technically possible'.
>>> This seems like a bad fit for Cloud. Providing co-location for
>>> specialty hardware for a specific project is a lot of burden for us
>>> and really doesn't give them any practice for eventual implementation
>>> in production.
>> Nova definitely /does/ support having different types of virt nodes, and
>> scheduling certain VMs on certain hardware. So, virt nodes with GPUs could
>> just back an additional VM type offered by the cloud.
>>
>> As to how those VMs actually /talk/ to the GPUs... in theory this is a
>> feature implemented in Nova, but I haven't investigated it at all. (A
>> rough sketch of both mechanisms follows the quoted thread.)
>>
>>> Bryan
>>
>>
>
>
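For reference, a rough and untested sketch of the two Nova mechanisms
mentioned in the quoted thread, using python-novaclient: a tagged host
aggregate plus flavor extra specs covers "scheduling certain VMs on
certain hardware", and a PCI alias covers how the guest actually talks
to the GPU. Every name in it (the g1.gpu flavor, the gpu alias, the
aggregate, the hostname, the credentials, and the PCI IDs) is a made-up
placeholder, and the nova.conf options shown in the comments use the
flat option names from around this era, so treat it as an illustration
rather than a recipe.

    # Rough, untested sketch -- placeholder names throughout.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Hypothetical admin credentials / endpoint, for illustration only.
    auth = v3.Password(auth_url="https://keystone.example.org:5000/v3",
                       username="admin", password="secret",
                       project_name="admin",
                       user_domain_id="default", project_domain_id="default")
    nova = client.Client("2.1", session=session.Session(auth=auth))

    # Scheduling certain VMs onto certain hardware: group the GPU virt
    # hosts in a tagged host aggregate and pin a flavor to that tag
    # (needs the AggregateInstanceExtraSpecsFilter scheduler filter).
    agg = nova.aggregates.create("gpu-hosts", availability_zone=None)
    nova.aggregates.add_host(agg, "gpu-virt-host-01")  # placeholder host
    nova.aggregates.set_metadata(agg, {"gpu": "true"})

    # The additional VM type: a flavor that lands on those hosts and asks
    # Nova's PCI passthrough support for one device from a "gpu" alias.
    # The alias and whitelist live in nova.conf on the controller and the
    # GPU hosts, e.g.:
    #   pci_alias = {"vendor_id": "10de", "product_id": "13f2", "name": "gpu"}
    #   pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "13f2"}
    # plus PciPassthroughFilter in the scheduler filters.
    flavor = nova.flavors.create(name="g1.gpu", ram=8192, vcpus=4, disk=80)
    flavor.set_keys({
        "aggregate_instance_extra_specs:gpu": "true",
        "pci_passthrough:alias": "gpu:1",
    })

The 'fickle beasts' part is mostly what the sketch leaves out: IOMMU
enabled on the virt hosts, the GPU bound to vfio-pci rather than the
host driver, and driver questions inside the guest.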