
Improve chat initial experience #885

Merged
merged 13 commits on Sep 22, 2024

Conversation

mdmohsin7
Collaborator

@mdmohsin7 mdmohsin7 commented Sep 18, 2024

Related to #889

  • The initial message should include default starter questions for the user to try; make them exciting.
  • Use the user's name to make the chat feel more personal.
  • Add a clear-chat option (placement still to be decided).
  • Improve the progress indicator shown when chat messages load for the first time.
  • Load the chat as soon as the app opens (with caching) instead of making the user wait after tapping the chat tab.
  • Chat-to-memory-detail navigation is improved, and errors are handled. The only issue found was incorrect memory details showing up due to how the new approach was structured, which is also fixed. In addition, a memory is now downloaded only if it is not already in the memory_provider, so navigation should be quick in most cases.
memory_loading_in_chat.mp4
clearchat.mp4
messages_cache_and_sync.mp4

Summary by Entelligence.AI

Release Notes:

  • New Feature: Added the ability for users to clear their chat history, improving user control over data.
  • New Feature: Introduced an animated mini banner widget that enhances the UI experience.
  • Refactor: Optimized memory detail navigation and initial chat options for better performance and user interaction.
  • Refactor: Improved message loading and chat clearing behavior in the MessageProvider class.
  • Bug Fix: Adjusted logic in addMemoryWithDate method to correctly handle memory insertion based on date comparison.
  • Chore: Added a new backend endpoint to clear chat history and return initial message, enhancing server-side functionality.


github-actions bot commented Sep 18, 2024

Entelligence.AI

Walkthrough

This update enhances the chat experience by introducing a feature to clear chat history, improving initial chat options, and optimizing memory detail navigation. It also includes backend changes for batch deletion of messages and skipping deleted ones. The frontend sees updates in message display, addition of an animated banner widget, and modifications in memory handling.

Changes

Files | Summary
  • app/lib/pages/chat/widgets/ai_message.dart, app/lib/pages/chat/widgets/animated_mini_banner.dart — Improved user interaction with better padding, text content, and an animated banner widget.
  • app/lib/backend/http/api/messages.dart, backend/database/chat.py, backend/routers/chat.py — Introduced functions to clear chat history, delete messages in batches, and skip deleted messages. Added a new endpoint to clear chat history.
  • app/lib/providers/message_provider.dart, app/lib/main.dart — Added a method to clear chat messages, fetch new messages from the server, and update internal state. Calls to these methods were added in the app's initialization.
  • app/lib/providers/memory_provider.dart — Updated the method to add memory based on date comparison and adjusted the list of memories with dates.

"In the realm of code, where logic is king 👑,

We dance with variables, in loops we swing. 💃🕺

With each new feature, our hearts do sing 🎵,

For every update, joy it will bring. 😊"




@github-actions github-actions bot left a comment

Entelligence.AI

Commits: files that changed between fd4967a and a8e4ed8.
Files selected (1)
  • app/lib/pages/chat/widgets/ai_message.dart (3)
Review comments generated (3)
  • Review: 2
  • LGTM: 1

Tips

Chat with Entelligence.AI Bot (@Entelligence.AI)

  • Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
  • Invite the bot into a review comment chain by tagging @Entelligence.AI in a reply.

Code suggestions

  • The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
  • You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.

Pausing incremental reviews

  • Add @Entelligence.AI: ignore anywhere in the PR description to pause further reviews from the bot.

@@ -256,7 +256,7 @@ class _AIMessageState extends State<AIMessage> {
  _getInitialOption(BuildContext context, String optionText) {
    return GestureDetector(
      child: Container(
-       padding: const EdgeInsets.symmetric(horizontal: 12.0, vertical: 8),
+       padding: const EdgeInsets.symmetric(horizontal: 12.0, vertical: 10),

Entelligence.AI

The padding for the container has been increased from 8 to 10. This is a minor UI change and doesn't seem to introduce any issues.

-        padding: const EdgeInsets.symmetric(horizontal: 12.0, vertical: 8),
+        padding: const EdgeInsets.symmetric(horizontal: 12.0, vertical: 10),


@github-actions github-actions bot left a comment

Entelligence.AI

Commits: files that changed between a8e4ed8 and a280ead.
Files selected (4)
  • app/lib/backend/http/api/messages.dart (1)
  • app/lib/providers/message_provider.dart (1)
  • backend/database/chat.py (1)
  • backend/routers/chat.py (1)
Files skipped from review due to trivial changes (1)
  • backend/routers/chat.py
Review comments generated (5)
  • Review: 3
  • LGTM: 2


Comment on lines 44 to 48
Future clearChat() async {
  var mes = await clearChatServer();
  messages = mes;
  notifyListeners();
}

Entelligence.AI

The clearChat method is not handling potential errors that might occur during the API call to clearChatServer(). It's important to add error handling here to prevent the application from crashing or entering an inconsistent state if the server request fails. You can use a try-catch block to handle any exceptions that might be thrown.

  Future clearChat() async {
    try {
      var mes = await clearChatServer();
      messages = mes;
      notifyListeners();
    } catch (e) {
      // Handle or log the error as appropriate for your application.
      print('Error clearing chat: $e');
    }
  }

Comment on lines 85 to 101


def clear_chat(uid, batch_size):
    user_ref = db.collection('users').document(uid)
    messages_ref = user_ref.collection('messages')
    if batch_size == 0:
        return
    docs = messages_ref.list_documents(page_size=batch_size)
    deleted = 0

    for doc in docs:
        print(f"Deleting doc {doc.id} => {doc.get().to_dict()}")
        doc.delete()
        deleted = deleted + 1

    if deleted >= batch_size:
        return clear_chat(uid, batch_size)

Entelligence.AI

The clear_chat function recursively deletes messages from the database. However, it's not a good practice to use recursion for this kind of operation because it can lead to a stack overflow if the number of recursive calls is too large. Instead, consider using a loop to delete the messages.

Also, the print statement on line 96 could potentially expose sensitive user data in logs. It's best to avoid logging sensitive information.

Here's a suggested change:

- def clear_chat(uid,batch_size):
-     user_ref = db.collection('users').document(uid)
-     messages_ref = user_ref.collection('messages')
-     if batch_size == 0:
-         return
-     docs = messages_ref.list_documents(page_size=batch_size)
-     deleted = 0
- 
-     for doc in docs:
-         print(f"Deleting doc {doc.id} => {doc.get().to_dict()}")
-         doc.delete()
-         deleted = deleted + 1
- 
-     if deleted >= batch_size:
-         return clear_chat(uid,batch_size)

+ def clear_chat(uid, batch_size):
+     user_ref = db.collection('users').document(uid)
+     messages_ref = user_ref.collection('messages')
+     while True:
+         docs = messages_ref.list_documents(page_size=batch_size)
+         deleted = 0
+         for doc in docs:
+             doc.delete()
+             deleted += 1
+         if deleted < batch_size:
+             break
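For illustration, the loop-based deletion above can be exercised end to end with an in-memory stand-in for the Firestore collection; `FakeMessages` and its methods are hypothetical substitutes for the real client, so this is a sketch of the control flow only (delete up to `batch_size` documents per pass, stop once a pass comes up short):

```python
class FakeMessages:
    """In-memory stand-in for a Firestore messages collection."""

    def __init__(self, count):
        self.docs = list(range(count))

    def list_documents(self, page_size):
        # Return at most one page of documents, like a paged query.
        return self.docs[:page_size]

    def delete(self, doc):
        self.docs.remove(doc)


def clear_chat(messages_ref, batch_size=450):
    if batch_size <= 0:
        return
    while True:
        deleted = 0
        for doc in messages_ref.list_documents(page_size=batch_size):
            messages_ref.delete(doc)
            deleted += 1
        if deleted < batch_size:
            # The last page was short (or empty): nothing left to delete.
            break


store = FakeMessages(1000)
clear_chat(store, batch_size=450)
print(len(store.docs))  # 0
```

The slice returned by `list_documents` plays the role of a paged query result, which is why the loop terminates cleanly even though documents are removed while iterating.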

Comment on lines +26 to +34
Future<List<ServerMessage>> clearChatServer() async {
  var response = await makeApiCall(url: '${Env.apiBaseUrl}v1/clear-chat', headers: {}, method: 'DELETE', body: '');
  if (response == null) throw Exception('Failed to delete chat');
  if (response.statusCode == 200) {
    return [ServerMessage.fromJson(jsonDecode(response.body))];
  } else {
    throw Exception('Failed to delete chat');
  }
}

Entelligence.AI

The clearChatServer function is making an API call to clear the chat. However, it's not passing any user identification or authentication headers in the request. This could lead to security issues as anyone can clear the chat without proper authorization. Please ensure that you are passing the necessary headers for user identification and authorization.

- var response = await makeApiCall(url: '${Env.apiBaseUrl}v1/clear-chat', headers: {}, method: 'DELETE', body: '');
+ var response = await makeApiCall(url: '${Env.apiBaseUrl}v1/clear-chat', headers: {'Authorization': 'Bearer $userToken'}, method: 'DELETE', body: '');

Also, consider adding error handling for different types of HTTP status codes. Currently, if the status code is not 200, it throws a generic exception. It would be more informative to throw specific exceptions based on the status code.

- if (response.statusCode == 200) {
-   return [ServerMessage.fromJson(jsonDecode(response.body))];
- } else {
-   throw Exception('Failed to delete chat');
- }
+ switch (response.statusCode) {
+   case 200:
+     return [ServerMessage.fromJson(jsonDecode(response.body))];
+   case 401:
+     throw Exception('Unauthorized request');
+   case 404:
+     throw Exception('Chat not found');
+   default:
+     throw Exception('Failed to delete chat');
+ }

@mdmohsin7 mdmohsin7 marked this pull request as ready for review September 20, 2024 16:50

@github-actions github-actions bot left a comment

Entelligence.AI

Commits: files that changed between a280ead and 5250c23.
Files selected (8)
  • app/lib/main.dart (1)
  • app/lib/pages/chat/page.dart (4)
  • app/lib/pages/chat/widgets/ai_message.dart (5)
  • app/lib/pages/chat/widgets/animated_mini_banner.dart (1)
  • app/lib/providers/memory_provider.dart (1)
  • app/lib/providers/message_provider.dart (2)
  • backend/database/chat.py (2)
  • backend/routers/chat.py (1)
Files not summarized due to errors (1)
  • app/lib/pages/chat/page.dart (diff tokens exceeds limit)
Review comments generated (18)
  • Review: 12
  • LGTM: 6


Comment on lines +277 to +280

if (context.read<AuthenticationProvider>().user != null) {
  context.read<HomeProvider>().setupHasSpeakerProfile();
  context.read<MessageProvider>().setMessagesFromCache();
  context.read<MessageProvider>().refreshMessages();

Entelligence.AI

The setMessagesFromCache() and refreshMessages() methods are called one after the other. If refreshMessages() is designed to fetch new messages from the server, it might overwrite the messages set by setMessagesFromCache(). Consider using await to ensure that setMessagesFromCache() completes before refreshMessages() starts. Also, error handling should be added for these operations.

-        context.read<MessageProvider>().setMessagesFromCache();
-        context.read<MessageProvider>().refreshMessages();
+        try {
+          await context.read<MessageProvider>().setMessagesFromCache();
+          await context.read<MessageProvider>().refreshMessages();
+        } catch (e) {
+          // Handle or log error
+        }

Please note that this suggestion assumes that both setMessagesFromCache() and refreshMessages() return a Future. If they don't, you may need to refactor them to do so.
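As a hypothetical asyncio model of the race described above (the `MessageProvider` class, method names, and sleep durations below are illustrative stand-ins for the Dart code, chosen so the cache read is slower than the network call):

```python
import asyncio


class MessageProvider:
    def __init__(self):
        self.messages = []

    async def set_messages_from_cache(self):
        await asyncio.sleep(0.02)          # simulate a slow disk read
        self.messages = ["cached message"]

    async def refresh_messages(self):
        await asyncio.sleep(0.01)          # simulate a faster network call
        self.messages = ["server message"]


async def sequential():
    p = MessageProvider()
    await p.set_messages_from_cache()      # awaited: finishes first
    await p.refresh_messages()             # server result applied last
    return p.messages


async def racy():
    p = MessageProvider()
    # Fired concurrently: the faster refresh finishes first, then the
    # slower cache read overwrites the fresh server data.
    await asyncio.gather(p.set_messages_from_cache(), p.refresh_messages())
    return p.messages


print(asyncio.run(sequential()))  # ['server message']
print(asyncio.run(racy()))        # ['cached message']
```

The sequential version always ends with the server data; the concurrent version ends with whichever call happens to finish last, which is exactly the overwrite the review warns about.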

Comment on lines 74 to 243
message: message,
sendMessage: _sendMessageUtil,
displayOptions: provider.messages.length <= 1,
pluginSender: plugins.firstWhereOrNull((e) => e.id == message.pluginId),
updateMemory: (ServerMemory memory) {
context.read<MemoryProvider>().updateMemory(memory);
},
)
: HumanMessage(message: message),
);
},
),
),
Consumer<HomeProvider>(builder: (context, home, child) {
return Align(
alignment: Alignment.bottomCenter,
child: Container(
width: double.maxFinite,
padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 2),
margin: EdgeInsets.only(left: 32, right: 32, bottom: home.isChatFieldFocused ? 40 : 120),
decoration: const BoxDecoration(
color: Colors.black,
borderRadius: BorderRadius.all(Radius.circular(16)),
border: GradientBoxBorder(
gradient: LinearGradient(colors: [
Color.fromARGB(127, 208, 208, 208),
Color.fromARGB(127, 188, 99, 121),
Color.fromARGB(127, 86, 101, 182),
Color.fromARGB(127, 126, 190, 236)
]),
width: 1,
),
shape: BoxShape.rectangle,
),
child: TextField(
enabled: true,
controller: textController,
// textCapitalization: TextCapitalization.sentences,
obscureText: false,
focusNode: home.chatFieldFocusNode,
// canRequestFocus: true,
textAlign: TextAlign.start,
textAlignVertical: TextAlignVertical.center,
decoration: InputDecoration(
hintText: 'Ask your Friend anything',
hintStyle: const TextStyle(fontSize: 14.0, color: Colors.grey),
focusedBorder: InputBorder.none,
enabledBorder: InputBorder.none,
suffixIcon: IconButton(
splashColor: Colors.transparent,
splashRadius: 1,
onPressed: loading
? null
: () async {
String message = textController.text;
if (message.isEmpty) return;
if (connectivityProvider.isConnected) {
_sendMessageUtil(message);
} else {
ScaffoldMessenger.of(context).showSnackBar(
const SnackBar(
content: Text('Please check your internet connection and try again'),
duration: Duration(seconds: 2),
),
);
}
},
icon: loading
? const SizedBox(
width: 16,
height: 16,
child: CircularProgressIndicator(
valueColor: AlwaysStoppedAnimation<Color>(Colors.white),
),
)
: const Icon(
Icons.send_rounded,
color: Color(0xFFF7F4F4),
size: 24.0,
),
),
),
// maxLines: 8,
// minLines: 1,
// keyboardType: TextInputType.multiline,
style: TextStyle(fontSize: 14.0, color: Colors.grey.shade200),
),
// maxLines: 8,
// minLines: 1,
// keyboardType: TextInputType.multiline,
style: TextStyle(fontSize: 14.0, color: Colors.grey.shade200),
),
),
);
}),
],
);
}),
],
),

Entelligence.AI

This new hunk introduces a lot of changes to the UI and how it handles different states (loading messages, clearing chat, etc.). It's hard to pinpoint any specific issues without seeing the entire codebase, but overall, the logic seems sound. The use of Consumer2 to listen to changes in both MessageProvider and ConnectivityProvider is a good practice. However, this section of the code is quite large and complex. Consider breaking it down into smaller, more manageable widgets or functions to improve readability and maintainability.

Comment on lines +113 to +136
int addMemoryWithDate(ServerMemory memory) {
  int idx;
  var date = memoriesWithDates.indexWhere((element) =>
      element is DateTime &&
      element.day == memory.createdAt.day &&
      element.month == memory.createdAt.month &&
      element.year == memory.createdAt.year);
  if (date != -1) {
    var hour = memoriesWithDates[date + 1].createdAt.hour;
    var newHour = memory.createdAt.hour;
    if (newHour > hour) {
      memoriesWithDates.insert(date + 1, memory);
      idx = date + 1;
    } else {
      memoriesWithDates.insert(date + 2, memory);
      idx = date + 2;
    }
  } else {
    memoriesWithDates.add(memory.createdAt);
    memoriesWithDates.add(memory);
    idx = memoriesWithDates.length - 1;
  }
  notifyListeners();
  return idx;

Entelligence.AI

The addMemoryWithDate function seems to be handling the insertion of a memory based on its date. However, it assumes that memories are sorted in ascending order by hour within a day. If this is not the case, the function may insert a memory at an incorrect position.

To ensure correct behavior, you should sort the memories for a particular day by hour before inserting a new memory. Here's a possible way to do it:

-      var hour = memoriesWithDates[date + 1].createdAt.hour;
-      var newHour = memory.createdAt.hour;
-      if (newHour > hour) {
-        memoriesWithDates.insert(date + 1, memory);
-        idx = date + 1;
-      } else {
-        memoriesWithDates.insert(date + 2, memory);
-        idx = date + 2;
-      }
+      int insertIndex = date + 1;
+      while (insertIndex < memoriesWithDates.length && 
+             memoriesWithDates[insertIndex] is ServerMemory &&
+             (memoriesWithDates[insertIndex] as ServerMemory).createdAt.hour < memory.createdAt.hour) {
+        insertIndex++;
+      }
+      memoriesWithDates.insert(insertIndex, memory);
+      idx = insertIndex;

This change will find the correct position to insert the new memory by comparing its hour with existing memories' hours and stop when it finds a memory with a greater hour or reaches the end of the list.
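The suggested scan can be modeled in runnable form against a flat list that mixes day headers (`datetime` values) and memories, assumed sorted by hour within each day; `Memory` and `add_memory_with_date` are illustrative stand-ins for `ServerMemory` and the Dart method:

```python
from datetime import datetime


class Memory:
    """Hypothetical stand-in for ServerMemory: only the timestamp matters."""

    def __init__(self, created_at):
        self.created_at = created_at


def add_memory_with_date(items, memory):
    day = memory.created_at.date()
    # Find the header entry for this memory's day, if any.
    header_idx = next(
        (i for i, e in enumerate(items)
         if isinstance(e, datetime) and e.date() == day),
        -1,
    )
    if header_idx == -1:
        # No entries for this day yet: append a header, then the memory.
        items.append(datetime(day.year, day.month, day.day))
        items.append(memory)
        return len(items) - 1
    # Walk past same-day memories with an earlier hour, as the fix suggests.
    insert_idx = header_idx + 1
    while (insert_idx < len(items)
           and isinstance(items[insert_idx], Memory)
           and items[insert_idx].created_at.hour < memory.created_at.hour):
        insert_idx += 1
    items.insert(insert_idx, memory)
    return insert_idx


items = []
add_memory_with_date(items, Memory(datetime(2024, 9, 18, 9)))
add_memory_with_date(items, Memory(datetime(2024, 9, 18, 15)))
idx = add_memory_with_date(items, Memory(datetime(2024, 9, 18, 12)))
print(idx)  # 2: inserted between the 9:00 and 15:00 memories
```

Unlike the original two-branch version, the scan keeps the day's memories ordered no matter where the new memory's hour falls.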

Comment on lines +121 to +122
if (widget.displayOptions) const SizedBox(height: 8),
if (widget.displayOptions) ..._getInitialOptions(context),

Entelligence.AI

The changes made here are good for improving user engagement. The initial options are now displayed based on the displayOptions flag, which is more flexible than the previous hard-coded condition.

Comment on lines +132 to +183
var memProvider = Provider.of<MemoryProvider>(context, listen: false);
var idx = memProvider.memoriesWithDates.indexWhere((e) {
if (e.runtimeType == ServerMemory) {
return e.id == data.$2.id;
}
return false;
});

ServerMemory? m = await getMemoryById(data.$2.id);
if (m == null) return;
MixpanelManager().chatMessageMemoryClicked(m);
setState(() => memoryDetailLoading[data.$1] = false);
await Navigator.of(context)
.push(MaterialPageRoute(builder: (c) => MemoryDetailPage(memory: m)));
if (SharedPreferencesUtil().modifiedMemoryDetails?.id == m.id) {
ServerMemory modifiedDetails = SharedPreferencesUtil().modifiedMemoryDetails!;
widget.updateMemory(SharedPreferencesUtil().modifiedMemoryDetails!);
var copy = List<MessageMemory>.from(widget.message.memories);
copy[data.$1] = MessageMemory(
modifiedDetails.id,
modifiedDetails.createdAt,
MessageMemoryStructured(
modifiedDetails.structured.title,
modifiedDetails.structured.emoji,
));
widget.message.memories.clear();
widget.message.memories.addAll(copy);
SharedPreferencesUtil().modifiedMemoryDetails = null;
setState(() {});
if (idx != -1) {
context.read<MemoryDetailProvider>().updateMemory(idx);
var m = memProvider.memoriesWithDates[idx];
MixpanelManager().chatMessageMemoryClicked(m);
await Navigator.of(context).push(
MaterialPageRoute(
builder: (c) => MemoryDetailPage(
memory: m,
),
),
);
} else {
if (memoryDetailLoading[data.$1]) return;
setState(() => memoryDetailLoading[data.$1] = true);
ServerMemory? m = await getMemoryById(data.$2.id);
if (m == null) return;
idx = memProvider.addMemoryWithDate(m);
MixpanelManager().chatMessageMemoryClicked(m);
setState(() => memoryDetailLoading[data.$1] = false);
context.read<MemoryDetailProvider>().updateMemory(idx);
await Navigator.of(context).push(
MaterialPageRoute(
builder: (c) => MemoryDetailPage(
memory: m,
),
),
);
//TODO: Not needed anymore I guess because memories are stored in provider and read from there only
if (SharedPreferencesUtil().modifiedMemoryDetails?.id == m.id) {
ServerMemory modifiedDetails = SharedPreferencesUtil().modifiedMemoryDetails!;
widget.updateMemory(SharedPreferencesUtil().modifiedMemoryDetails!);
var copy = List<MessageMemory>.from(widget.message.memories);
copy[data.$1] = MessageMemory(
modifiedDetails.id,
modifiedDetails.createdAt,
MessageMemoryStructured(
modifiedDetails.structured.title,
modifiedDetails.structured.emoji,
));
widget.message.memories.clear();
widget.message.memories.addAll(copy);
SharedPreferencesUtil().modifiedMemoryDetails = null;
setState(() {});
}

Entelligence.AI

This block of code has been significantly refactored to improve error handling and maintainability. It now checks if the memory already exists in the provider before fetching it from the server, which can improve performance by reducing unnecessary network calls. However, there's a potential issue with this approach: if the memory data on the server has changed since it was last fetched, the app will display outdated information. To fix this, you could add a timestamp to each memory and check if it's older than a certain threshold before deciding whether to fetch it again.

+                          var memProvider = Provider.of<MemoryProvider>(context, listen: false);
+                          var idx = memProvider.memoriesWithDates.indexWhere((e) {
+                            if (e.runtimeType == ServerMemory) {
+                              return e.id == data.$2.id;
+                            }
+                            return false;
+                          });
+
+                          if (idx != -1 && memProvider.memoriesWithDates[idx].timestamp < SOME_THRESHOLD) {
+                            // Fetch the memory again
+                          }

Line 289: The padding has been slightly increased, which should make the chat messages easier to read and interact with.

Comment on lines +306 to +310
_getInitialOption(context, 'What\'s been on my mind a lot?'),
const SizedBox(height: 8),
_getInitialOption(context, 'What conversations did I have with John?'),
_getInitialOption(context, 'Did I forget to follow up on something?'),
const SizedBox(height: 8),
_getInitialOption(context, 'What advice have I received about entrepreneurship?'),
_getInitialOption(context, 'What\'s the funniest thing I\'ve said lately?'),

Entelligence.AI

The text content of the initial options has been updated to be more engaging and interesting for the user. This is a good change for improving user experience.

Comment on lines +58 to +61
@router.delete('/v1/clear-chat', tags=['chat'], response_model=Message)
def clear_chat(uid: str = Depends(auth.get_current_user_uid)):
    chat_db.clear_chat(uid)
    return initial_message_util(uid)

Entelligence.AI

The new endpoint /v1/clear-chat is introduced to clear the chat history. However, there's no error handling in case chat_db.clear_chat(uid) or initial_message_util(uid) fails. It would be better to wrap these calls in a try-except block and handle potential exceptions appropriately.

@router.delete('/v1/clear-chat', tags=['chat'], response_model=Message)
def clear_chat(uid: str = Depends(auth.get_current_user_uid)):
+    try:
        chat_db.clear_chat(uid)
        return initial_message_util(uid)
+    except Exception as e:
+        # Log the exception and return an appropriate HTTP response
+        logging.error(f"Error while clearing chat for user {uid}: {str(e)}")
+        raise HTTPException(status_code=500, detail="Internal server error")

Comment on lines +13 to +30
bool hasCachedMessages = false;
bool isClearingChat = false;

String firstTimeLoadingText = '';

void updatePluginProvider(PluginProvider p) {
  pluginProvider = p;
}

void setHasCachedMessages(bool value) {
  hasCachedMessages = value;
  notifyListeners();
}

void setClearingChat(bool value) {
  isClearingChat = value;
  notifyListeners();
}

Entelligence.AI

The new boolean flags hasCachedMessages and isClearingChat along with their setter methods are introduced. These flags help to manage the state of message loading and chat clearing, which is a good practice for state management in Flutter. However, it's important to ensure that these flags are used consistently throughout the codebase to avoid any potential inconsistencies or bugs.

+  bool hasCachedMessages = false;
+  bool isClearingChat = false;

+  void setHasCachedMessages(bool value) {
+    hasCachedMessages = value;
+    notifyListeners();
+  }

+  void setClearingChat(bool value) {
+    isClearingChat = value;
+    notifyListeners();
+  }

Comment on lines +49 to +80
void setMessagesFromCache() {
  if (SharedPreferencesUtil().cachedMessages.isNotEmpty) {
    setHasCachedMessages(true);
    messages = SharedPreferencesUtil().cachedMessages;
  }
  notifyListeners();
}

Future<List<ServerMessage>> getMessagesFromServer() async {
  if (!hasCachedMessages) {
    firstTimeLoadingText = 'Reading your memories...';
    notifyListeners();
  }
  setLoadingMessages(true);
  var mes = await getMessagesServer();
  if (!hasCachedMessages) {
    firstTimeLoadingText = 'Learning from your memories...';
    notifyListeners();
  }
  messages = mes;
  setLoadingMessages(false);
  notifyListeners();
  return messages;
}

Future clearChat() async {
  setClearingChat(true);
  var mes = await clearChatServer();
  messages = mes;
  setClearingChat(false);
  notifyListeners();
}

Entelligence.AI

The methods setMessagesFromCache(), getMessagesFromServer(), and clearChat() have been added. These methods handle fetching messages from cache, getting messages from the server, and clearing the chat respectively. The use of SharedPreferencesUtil().cachedMessages for caching is a good approach to improve user experience by reducing load times. However, error handling seems to be missing in these async operations. It would be beneficial to add try-catch blocks around these operations to handle any potential exceptions and provide appropriate feedback to the user.

+  void setMessagesFromCache() {
+    if (SharedPreferencesUtil().cachedMessages.isNotEmpty) {
+      setHasCachedMessages(true);
+      messages = SharedPreferencesUtil().cachedMessages;
+    }
+    notifyListeners();
+  }

+  Future<List<ServerMessage>> getMessagesFromServer() async {
+    if (!hasCachedMessages) {
+      firstTimeLoadingText = 'Reading your memories...';
+      notifyListeners();
+    }
+    setLoadingMessages(true);
+    var mes = await getMessagesServer();
+    if (!hasCachedMessages) {
+      firstTimeLoadingText = 'Learning from your memories...';
+      notifyListeners();
+    }
+    messages = mes;
+    setLoadingMessages(false);
+    notifyListeners();
+    return messages;
+  }

+  Future clearChat() async {
+    setClearingChat(true);
+    var mes = await clearChatServer();
+    messages = mes;
+    setClearingChat(false);
+    notifyListeners();
+  }

Comment on lines 86 to 109


async def batch_delete_messages(parent_doc_ref, batch_size=450):
    # batch size is 450 because Firebase can perform up to 500 operations in a batch
    messages_ref = parent_doc_ref.collection('messages')
    while True:
        docs = messages_ref.limit(batch_size).stream()
        docs_list = list(docs)
        if not docs_list:
            break
        batch = db.batch()
        for doc in docs_list:
            batch.delete(doc.reference)
        batch.commit()


async def clear_chat(uid: str):
    try:
        user_ref = db.collection('users').document(uid)
        if not user_ref.get().exists:
            raise HTTPException(status_code=404, detail="User not found")
        await batch_delete_messages(user_ref)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error deleting messages: {str(e)}")

Entelligence.AI

The batch_delete_messages function is a good approach to delete messages in batches, considering Firebase's limit of 500 operations per batch. However, it's important to note that this function will continue deleting messages until there are none left. This could potentially lead to an infinite loop if there's an issue with the deletion process or if new messages are being added faster than they're being deleted.

In the clear_chat function, you're correctly checking if the user exists before attempting to delete their messages. However, the error handling could be improved. Currently, any exception raised during the deletion process will result in a 500 status code being returned. It would be more informative to differentiate between different types of errors and return appropriate status codes. For example, if the deletion fails due to insufficient permissions, a 403 status code would be more appropriate.

async def batch_delete_messages(parent_doc_ref, batch_size=450):
    # batch size is 450 because firebase can perform upto 500 operations in a batch
    messages_ref = parent_doc_ref.collection('messages')
    while True:
        docs = messages_ref.limit(batch_size).stream()
        docs_list = list(docs)
        if not docs_list:
            break
        batch = db.batch()
        for doc in docs_list:
            batch.delete(doc.reference)
        batch.commit()


async def clear_chat( uid: str):
    try:
        user_ref = db.collection('users').document(uid)
        if not user_ref.get().exists:
            raise HTTPException(status_code=404, detail="User not found")
        await batch_delete_messages(user_ref)
    except Exception as e:
        # Differentiate between different types of errors
+        if isinstance(e, PermissionError):
+            raise HTTPException(status_code=403, detail=f"Insufficient permissions: {str(e)}")
+        elif isinstance(e, SomeOtherError):
+            raise HTTPException(status_code=XXX, detail=f"Some other error: {str(e)}")
-        raise HTTPException(status_code=500, detail=f"Error deleting messages: {str(e)}")
\ No newline at end of file
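The runaway-loop risk raised above can be made concrete with an explicit iteration bound. A minimal sketch in plain Python — the in-memory list and the `max_batches` parameter are illustrative stand-ins, not part of the PR's Firestore code:

```python
def batch_delete_messages(messages, batch_size=450, max_batches=1000):
    """Delete messages in fixed-size batches, refusing to loop forever.

    `messages` is an in-memory stand-in for the Firestore collection;
    each iteration mirrors .limit(batch_size).stream() plus a batched delete.
    """
    deleted = 0
    for _ in range(max_batches):
        batch = messages[:batch_size]  # stand-in for .limit(batch_size).stream()
        if not batch:
            return deleted             # nothing left: normal exit
        del messages[:len(batch)]      # stand-in for batch.delete + batch.commit()
        deleted += len(batch)
    # Hitting the bound means deletes are not making progress (or writes are
    # outpacing them) -- fail loudly instead of spinning forever.
    raise RuntimeError("batch deletion did not converge")


msgs = [{"id": i} for i in range(1000)]
removed = batch_delete_messages(msgs, batch_size=450)
# removed == 1000 and msgs is now empty after three batches
```

With a bound in place, a stuck deletion surfaces as an error the caller can map to a 500 response instead of hanging the request.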


@github-actions github-actions bot left a comment


Entelligence.AI

Commits: files that changed from the base of the PR and between commits 5250c23 and f945d41.
Files selected (1)
  • app/lib/pages/chat/page.dart (4)
Files not summarized due to errors (1)
  • app/lib/pages/chat/page.dart (diff tokens exceeds limit)
Review comments generated (4)
  • Review: 3
  • LGTM: 1

Tips

Chat with Entelligence.AI Bot (@Entelligence.AI)

  • Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
  • Invite the bot into a review comment chain by tagging @Entelligence.AI in a reply.

Code suggestions

  • The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
  • You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.

Pausing incremental reviews

  • Add @Entelligence.AI: ignore anywhere in the PR description to pause further reviews from the bot.

@@ -50,7 +51,7 @@ class ChatPageState extends State<ChatPage> with AutomaticKeepAliveClientMixin {
   void initState() {
     plugins = prefs.pluginsList;
     SchedulerBinding.instance.addPostFrameCallback((_) async {
-      await context.read<MessageProvider>().refreshMessages();
+      // await context.read<MessageProvider>().refreshMessages();


Entelligence.AI

The line await context.read<MessageProvider>().refreshMessages(); has been commented out. If this is intentional and you no longer want to refresh messages on initialization, it's better to remove the line entirely to avoid confusion.

- // await context.read<MessageProvider>().refreshMessages();

Comment on lines 74 to 245
            message: message,
            sendMessage: _sendMessageUtil,
            displayOptions: provider.messages.length <= 1,
            pluginSender: plugins.firstWhereOrNull((e) => e.id == message.pluginId),
            updateMemory: (ServerMemory memory) {
              context.read<MemoryProvider>().updateMemory(memory);
            },
          )
        : HumanMessage(message: message),
    );
  },
),
),
Consumer<HomeProvider>(builder: (context, home, child) {
  return Align(
    alignment: Alignment.bottomCenter,
    child: Container(
      width: double.maxFinite,
      padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 2),
      margin: EdgeInsets.only(left: 32, right: 32, bottom: home.isChatFieldFocused ? 40 : 120),
      decoration: const BoxDecoration(
        color: Colors.black,
        borderRadius: BorderRadius.all(Radius.circular(16)),
        border: GradientBoxBorder(
          gradient: LinearGradient(colors: [
            Color.fromARGB(127, 208, 208, 208),
            Color.fromARGB(127, 188, 99, 121),
            Color.fromARGB(127, 86, 101, 182),
            Color.fromARGB(127, 126, 190, 236)
          ]),
          width: 1,
        ),
        shape: BoxShape.rectangle,
      ),
      child: TextField(
        enabled: true,
        controller: textController,
        // textCapitalization: TextCapitalization.sentences,
        obscureText: false,
        focusNode: home.chatFieldFocusNode,
        // canRequestFocus: true,
        textAlign: TextAlign.start,
        textAlignVertical: TextAlignVertical.center,
        decoration: InputDecoration(
          hintText: 'Ask your Friend anything',
          hintStyle: const TextStyle(fontSize: 14.0, color: Colors.grey),
          focusedBorder: InputBorder.none,
          enabledBorder: InputBorder.none,
          suffixIcon: IconButton(
            splashColor: Colors.transparent,
            splashRadius: 1,
            onPressed: loading
                ? null
                : () async {
                    String message = textController.text;
                    if (message.isEmpty) return;
                    if (connectivityProvider.isConnected) {
                      _sendMessageUtil(message);
                    } else {
                      ScaffoldMessenger.of(context).showSnackBar(
                        const SnackBar(
                          content: Text('Please check your internet connection and try again'),
                          duration: Duration(seconds: 2),
                        ),
                      );
                    }
                  },
            icon: loading
                ? const SizedBox(
                    width: 16,
                    height: 16,
                    child: CircularProgressIndicator(
                      valueColor: AlwaysStoppedAnimation<Color>(Colors.white),
                    ),
                  )
                : const Icon(
                    Icons.send_rounded,
                    color: Color(0xFFF7F4F4),
                    size: 24.0,
                  ),
          ),
        ),
        // maxLines: 8,
        // minLines: 1,
        // keyboardType: TextInputType.multiline,
        style: TextStyle(fontSize: 14.0, color: Colors.grey.shade200),
      ),
      // maxLines: 8,
      // minLines: 1,
      // keyboardType: TextInputType.multiline,
      style: TextStyle(fontSize: 14.0, color: Colors.grey.shade200),
    ),
  ),
);
}),
],
);
}),
],
),


Entelligence.AI

The changes made in this section are quite extensive and involve a significant UI overhaul. The new code introduces an AnimatedMiniBanner widget for displaying loading status, adds more detailed loading and error states, and restructures the chat display logic. It seems like these changes would improve user experience by providing more feedback during operations and handling edge cases more gracefully. However, without seeing the implementation of AnimatedMiniBanner and other related methods, it's hard to provide a thorough review. Please ensure that all new widgets and methods are properly tested.

Comment on lines +271 to +273
if (mounted) {
context.read<MessageProvider>().addMessage(message);
}


Entelligence.AI

This change adds a check to see if the widget is still in the tree before adding a message to the MessageProvider. This is a good practice as it prevents potential errors when trying to update the state of a widget that's no longer in the widget tree.

+ if (mounted) {
+   context.read<MessageProvider>().addMessage(message);
+ }


async def clear_chat( uid: str):
try:
user_ref = db.collection('users').document(uid)
Contributor


don't use db in a router

Contributor


You don't .delete, check the other delete in the code, is setting a flag {deleted: True}

Collaborator Author


don't use db in a router

Yes, it's in the database directory only

Collaborator Author


You don't .delete, check the other delete in the code, is setting a flag {deleted: True}

Got it

widget.message.memories.addAll(copy);
SharedPreferencesUtil().modifiedMemoryDetails = null;
setState(() {});
if (idx != -1) {
Contributor


Clearing the chat history should have a confirmation button.

@josancamon19
Contributor

The clear-chat button might appear at the top right or top left when I start scrolling, and disappear when I'm at the bottom of the page.


@github-actions github-actions bot left a comment


Entelligence.AI

Commits: files that changed from the base of the PR and between commits f945d41 and 47e8f15.
Files selected (3)
  • app/lib/pages/chat/page.dart (5)
  • app/lib/pages/chat/widgets/animated_mini_banner.dart (1)
  • backend/database/chat.py (3)
Files not summarized due to errors (1)
  • app/lib/pages/chat/page.dart (diff tokens exceeds limit)
Review comments generated (9)
  • Review: 5
  • LGTM: 4


Comment on lines +57 to +78
scrollController = ScrollController();
scrollController.addListener(() {
  if (scrollController.position.userScrollDirection == ScrollDirection.reverse) {
    if (!isScrollingDown) {
      isScrollingDown = true;
      _showDeleteOption = true;
      setState(() {});
    }
  }

  if (scrollController.position.userScrollDirection == ScrollDirection.forward) {
    if (isScrollingDown) {
      isScrollingDown = false;
      _showDeleteOption = false;
      setState(() {});
    }
  }
});
SchedulerBinding.instance.addPostFrameCallback((_) async {
  await context.read<MessageProvider>().refreshMessages();
  scrollToBottom();
});
// _initDailySummary();
;


Entelligence.AI

There's an unnecessary semicolon at line 78. It should be removed to keep the code clean.

-    ;

Comment on lines +95 to +304
            message: message,
            sendMessage: _sendMessageUtil,
            displayOptions: provider.messages.length <= 1,
            pluginSender: plugins.firstWhereOrNull((e) => e.id == message.pluginId),
            updateMemory: (ServerMemory memory) {
              context.read<MemoryProvider>().updateMemory(memory);
            },
          )
        : HumanMessage(message: message),
    );
  },
),
),
Consumer<HomeProvider>(builder: (context, home, child) {
  return Align(
    alignment: Alignment.bottomCenter,
    child: Container(
      width: double.maxFinite,
      padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 2),
      margin: EdgeInsets.only(left: 32, right: 32, bottom: home.isChatFieldFocused ? 40 : 120),
      decoration: const BoxDecoration(
        color: Colors.black,
        borderRadius: BorderRadius.all(Radius.circular(16)),
        border: GradientBoxBorder(
          gradient: LinearGradient(colors: [
            Color.fromARGB(127, 208, 208, 208),
            Color.fromARGB(127, 188, 99, 121),
            Color.fromARGB(127, 86, 101, 182),
            Color.fromARGB(127, 126, 190, 236)
          ]),
          width: 1,
        ),
        shape: BoxShape.rectangle,
      ),
      child: TextField(
        enabled: true,
        controller: textController,
        // textCapitalization: TextCapitalization.sentences,
        obscureText: false,
        focusNode: home.chatFieldFocusNode,
        // canRequestFocus: true,
        textAlign: TextAlign.start,
        textAlignVertical: TextAlignVertical.center,
        decoration: InputDecoration(
          hintText: 'Ask your Friend anything',
          hintStyle: const TextStyle(fontSize: 14.0, color: Colors.grey),
          focusedBorder: InputBorder.none,
          enabledBorder: InputBorder.none,
          suffixIcon: IconButton(
            splashColor: Colors.transparent,
            splashRadius: 1,
            onPressed: loading
                ? null
                : () async {
                    String message = textController.text;
                    if (message.isEmpty) return;
                    if (connectivityProvider.isConnected) {
                      _sendMessageUtil(message);
                    } else {
                      ScaffoldMessenger.of(context).showSnackBar(
                        const SnackBar(
                          content: Text('Please check your internet connection and try again'),
                          duration: Duration(seconds: 2),
                        ),
                      );
                    }
                  },
            icon: loading
                ? const SizedBox(
                    width: 16,
                    height: 16,
                    child: CircularProgressIndicator(
                      valueColor: AlwaysStoppedAnimation<Color>(Colors.white),
                    ),
                  )
                : const Icon(
                    Icons.send_rounded,
                    color: Color(0xFFF7F4F4),
                    size: 24.0,
                  ),
          ),
        ),
        // maxLines: 8,
        // minLines: 1,
        // keyboardType: TextInputType.multiline,
        style: TextStyle(fontSize: 14.0, color: Colors.grey.shade200),
      ),
      // maxLines: 8,
      // minLines: 1,
      // keyboardType: TextInputType.multiline,
      style: TextStyle(fontSize: 14.0, color: Colors.grey.shade200),
    ),
  ),
);
}),
],
);
}),
],
),


Entelligence.AI

The changes made in this hunk are quite extensive and involve a lot of UI modifications. However, there is one potential issue with the onTap function for the 'Clear Chat' button. The state is being updated before the clearChat() function is awaited. This could potentially lead to inconsistencies in the UI if clearChat() fails or takes a long time to complete. I would suggest moving the setState call after the await statement.

                          onTap: () async {
                            await context.read<MessageProvider>().clearChat();
                            setState(() {
                              _showDeleteOption = false;
                            });
                          },

@@ -2,6 +2,7 @@
 from datetime import datetime, timezone
 from typing import Optional

+from fastapi import HTTPException


Entelligence.AI

The import of HTTPException from fastapi is new in this hunk. However, it's not clear why this was added as there are no changes in the old hunk that would require this new import. If it's used in other parts of the code not shown in the hunks, then it's fine. Otherwise, unnecessary imports should be avoided to keep the code clean and maintainable.

- from fastapi import HTTPException

Comment on lines +64 to +65
if message.get('deleted') is True:
continue


Entelligence.AI

This change introduces a check for a 'deleted' flag on each message, which is a good way to skip soft-deleted messages when fetching chat history. Note that older messages created before this change may lack a 'deleted' field; message.get('deleted') already returns None in that case, so the check still works, but passing an explicit default of False documents the intent more clearly.

- if message.get('deleted') is True:
+ if message.get('deleted', False) is True:
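To make the missing-field behavior concrete, here is a quick sketch with plain dicts standing in for Firestore documents (the message shapes are illustrative, not the PR's actual data):

```python
messages = [
    {"text": "hi"},                       # older message, no 'deleted' field
    {"text": "bye", "deleted": True},     # soft-deleted
    {"text": "hello", "deleted": False},  # explicitly kept
]

# .get('deleted', False) yields False when the key is missing, so messages
# written before the flag existed are treated as visible, not skipped.
visible = [m for m in messages if m.get("deleted", False) is not True]
print([m["text"] for m in visible])  # ['hi', 'hello']
```

With the `is True` comparison both forms behave identically, since `.get('deleted')` returns `None` for a missing key and `None is True` is `False`; the explicit default is a readability improvement rather than a behavior change.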

Comment on lines +92 to +115
async def batch_delete_messages(parent_doc_ref, batch_size=450):
# batch size is 450 because firebase can perform upto 500 operations in a batch
messages_ref = parent_doc_ref.collection('messages')
while True:
docs = messages_ref.limit(batch_size).stream()
docs_list = list(docs)
if not docs_list:
print("No more messages to delete")
break
batch = db.batch()
for doc in docs_list:
batch.update(doc.reference, {'deleted': True})
batch.commit()


async def clear_chat(uid: str):
try:
user_ref = db.collection('users').document(uid)
print(f"Deleting messages for user: {uid}")
if not user_ref.get().exists:
raise HTTPException(status_code=404, detail="User not found")
await batch_delete_messages(user_ref)
except Exception as e:
raise HTTPException(status_code=500, detail=f"Error deleting messages: {str(e)}")


Entelligence.AI

This new function batch_delete_messages is a good addition for deleting messages in batches, which can be more efficient than deleting them one by one. The clear_chat function also seems to handle errors properly by raising an HTTPException with a status code and detail message. However, using print statements for logging purposes is not recommended. Consider using a proper logging framework like Python's built-in logging module. This allows for better control over what gets logged, where it gets logged, and in what format.

- print("No more messages to delete")
+ logging.info("No more messages to delete")

- print(f"Deleting messages for user: {uid}")
+ logging.info(f"Deleting messages for user: {uid}")

Also, it's important to note that the batch_delete_messages function marks messages as 'deleted' but doesn't actually remove them from the database. If this is the intended behavior, then it's fine. However, if you want to permanently delete these messages, you might need to adjust your approach.
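Both suggestions can be sketched together — the logging module in place of print, and soft-delete marking simulated on an in-memory list. The function name and data shape are illustrative, not the PR's actual Firestore code:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("chat")


def soft_delete_messages(messages, batch_size=450):
    """Mark messages as deleted in batches rather than removing them."""
    marked = 0
    while True:
        # stand-in for messages_ref.limit(batch_size).stream()
        pending = [m for m in messages if not m.get("deleted", False)][:batch_size]
        if not pending:
            logger.info("No more messages to delete")
            return marked
        for m in pending:  # stand-in for batch.update(ref, {'deleted': True})
            m["deleted"] = True
        marked += len(pending)


msgs = [{"text": str(i)} for i in range(5)]
n = soft_delete_messages(msgs)
# n == 5; every message now carries deleted=True but is still stored
```

Because only pending (unflagged) messages are fetched each pass, the loop terminates once everything is marked — the read-side filter on the 'deleted' flag is what makes this soft delete look like a real delete to users.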

@beastoin beastoin merged commit 0126160 into main Sep 22, 2024
1 check passed
@beastoin beastoin deleted the chat-improvements branch September 22, 2024 04:36