Using pipelines versus InferenceClient

Hello, I see that I can use a conversational model either via pipelines or via the InferenceClient. I would like to know the best practice and whether there is a recommendation on which of the two to use. The third way (using transformers directly with a model and tokenizer) is harder, and I don't want to use it; a rough sketch of it is included below for reference. Thank you!
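
For context, this is roughly what the direct approach looks like (a minimal sketch, assuming a DialoGPT-style causal LM; the model name and generation settings here are my own choices, not a recommendation):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user message, terminated with the EOS token as DialoGPT expects
input_ids = tokenizer.encode(
    "Going to the movies tonight - any suggestions?" + tokenizer.eos_token,
    return_tensors="pt",
)

# Chat history management, generation settings, and decoding are all manual here
output_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)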

Method 1: Pipelines

Calling the pipeline without a model argument also prints which model it defaults to.

from transformers import pipeline, Conversation

# No model specified, so the pipeline warns and falls back to its default
converse = pipeline("conversational")

conversation_1 = Conversation("Going to the movies tonight - any suggestions?")
# Runs the conversation through the model and appends the reply to it
converse([conversation_1])
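
To actually read the reply back (and to continue the chat), the returned Conversation object carries the history. A minimal sketch, assuming the older Conversation API with add_user_input and generated_responses:

from transformers import pipeline, Conversation

converse = pipeline("conversational")

conversation = Conversation("Going to the movies tonight - any suggestions?")
conversation = converse(conversation)

# The model's replies accumulate on the Conversation object
print(conversation.generated_responses[-1])

# Add a follow-up turn and run the pipeline again
conversation.add_user_input("Is it an action movie?")
conversation = converse(conversation)
print(conversation.generated_responses[-1])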

Method 2: InferenceClient

from huggingface_hub import InferenceClient

# No model specified, so the client resolves a recommended default for the task
client = InferenceClient()
output = client.conversational("Going to the movies tonight - any suggestions?")
# The output is dict-like, with the model's reply under "generated_text"
response = output["generated_text"]
print(response)
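
The output also carries the chat history, which can be fed back in for the next turn. A minimal sketch, assuming the output follows the Inference API's conversational schema with a "conversation" entry holding past_user_inputs and generated_responses:

from huggingface_hub import InferenceClient

client = InferenceClient()
first = client.conversational("Going to the movies tonight - any suggestions?")

# Feed the accumulated history back in to keep the dialogue coherent
second = client.conversational(
    "What else do you recommend?",
    past_user_inputs=first["conversation"]["past_user_inputs"],
    generated_responses=first["conversation"]["generated_responses"],
)
print(second["generated_text"])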

# This will work once the following PR is merged:
# https://github.com/huggingface/huggingface_hub/pull/1770
# The method takes the task name and returns the default model for it
default_model = client.get_recommended_model("conversational")
print(default_model)